{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:06:48.503790Z"
},
"title": "Triplet-Trained Vector Space and Sieve-Based Search Improve Biomedical Concept Normalization",
"authors": [
{
"first": "Dongfang",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Arizona Tucson",
"location": {
"region": "AZ"
}
},
"email": "dongfangxu9@email.arizona.edu"
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Arizona Tucson",
"location": {
"region": "AZ"
}
},
"email": "bethard@email.arizona.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Concept normalization, the task of linking textual mentions of concepts to concepts in an ontology, is critical for mining and analyzing biomedical texts. We propose a vector-space model for concept normalization, where mentions and concepts are encoded via transformer networks that are trained via a triplet objective with online hard triplet mining. The transformer networks refine existing pre-trained models, and the online triplet mining makes training efficient even with hundreds of thousands of concepts by sampling training triples within each mini-batch. We introduce a variety of strategies for searching with the trained vector-space model, including approaches that incorporate domain-specific synonyms at search time with no model retraining. Across five datasets, our models that are trained only once on their corresponding ontologies are within 3 points of state-of-the-art models that are retrained for each new domain. Our models can also be trained for each domain, achieving new state-of-the-art on multiple datasets.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Concept normalization, the task of linking textual mentions of concepts to concepts in an ontology, is critical for mining and analyzing biomedical texts. We propose a vector-space model for concept normalization, where mentions and concepts are encoded via transformer networks that are trained via a triplet objective with online hard triplet mining. The transformer networks refine existing pre-trained models, and the online triplet mining makes training efficient even with hundreds of thousands of concepts by sampling training triples within each mini-batch. We introduce a variety of strategies for searching with the trained vector-space model, including approaches that incorporate domain-specific synonyms at search time with no model retraining. Across five datasets, our models that are trained only once on their corresponding ontologies are within 3 points of state-of-the-art models that are retrained for each new domain. Our models can also be trained for each domain, achieving new state-of-the-art on multiple datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Concept normalization (aka. entity linking or entity normalization) is a fundamental task of information extraction which aims to map concept mentions in text to standard concepts in a knowledge base or ontology. This task is important for mining and analyzing unstructured text in the biomedical domain as the texts describing biomedical concepts have many morphological and orthographical variations, and utilize different word orderings or equivalent words. For instance, heart attack, coronary attack, MI, myocardial infarction, cardiac infarction, and cardiovascular stroke all refer to the same concept. Linking such terms with their corresponding concepts in an ontology or knowledge base is critical for data interoperability and the development of natural language processing (NLP) techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Research on concept normalization has grown thanks to shared tasks such as disorder normalization in the 2013 ShARe/CLEF (Suominen et al., 2013) , chemical and disease normalization in BioCreative V Chemical Disease Relation (CDR) Task , and medical concept normalization in 2019 n2c2 shared task (Henry et al., 2020) , and to the availability of annotated data (Dogan et al., 2014; Luo et al., 2019) . Existing approaches can be divided into three categories: rule-based approaches using string-matching or dictionary look-up (Leal et al., 2015; D'Souza and Ng, 2015; Lee et al., 2016) , which rely heavily on handcrafted rules and domain knowledge; supervised multi-class classifiers (Limsopatham and Collier, 2016; Lee et al., 2017; Tutubalina et al., 2018; Niu et al., 2019; Li et al., 2019) , which cannot generalize to concept types not present in their training data; and two-step frameworks based on a nontrained candidate generator and a supervised candidate ranker (Leaman et al., 2013; Li et al., 2017; Liu and Xu, 2017; Nguyen et al., 2018; Murty et al., 2018; Mondal et al., 2019; Ji et al., 2020; Xu et al., 2020) , which require complex pipelines and fail if the candidate generator does not find the gold truth concept.",
"cite_spans": [
{
"start": 121,
"end": 144,
"text": "(Suominen et al., 2013)",
"ref_id": "BIBREF39"
},
{
"start": 297,
"end": 317,
"text": "(Henry et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 362,
"end": 382,
"text": "(Dogan et al., 2014;",
"ref_id": "BIBREF3"
},
{
"start": 383,
"end": 400,
"text": "Luo et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 527,
"end": 546,
"text": "(Leal et al., 2015;",
"ref_id": "BIBREF14"
},
{
"start": 547,
"end": 568,
"text": "D'Souza and Ng, 2015;",
"ref_id": "BIBREF4"
},
{
"start": 569,
"end": 586,
"text": "Lee et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 686,
"end": 717,
"text": "(Limsopatham and Collier, 2016;",
"ref_id": "BIBREF24"
},
{
"start": 718,
"end": 735,
"text": "Lee et al., 2017;",
"ref_id": "BIBREF20"
},
{
"start": 736,
"end": 760,
"text": "Tutubalina et al., 2018;",
"ref_id": "BIBREF40"
},
{
"start": 761,
"end": 778,
"text": "Niu et al., 2019;",
"ref_id": "BIBREF33"
},
{
"start": 779,
"end": 795,
"text": "Li et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 975,
"end": 996,
"text": "(Leaman et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 997,
"end": 1013,
"text": "Li et al., 2017;",
"ref_id": "BIBREF22"
},
{
"start": 1014,
"end": 1031,
"text": "Liu and Xu, 2017;",
"ref_id": "BIBREF25"
},
{
"start": 1032,
"end": 1052,
"text": "Nguyen et al., 2018;",
"ref_id": "BIBREF32"
},
{
"start": 1053,
"end": 1072,
"text": "Murty et al., 2018;",
"ref_id": "BIBREF30"
},
{
"start": 1073,
"end": 1093,
"text": "Mondal et al., 2019;",
"ref_id": "BIBREF29"
},
{
"start": 1094,
"end": 1110,
"text": "Ji et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 1111,
"end": 1127,
"text": "Xu et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a vector space model for concept normalization, where mentions and concepts are encoded as vectors -via transformer networks trained via a triplet objective with online hard triplet mining -and mentions are matched to concepts by vector similarity. The online hard triplet mining strategy selects the hard positive/negative exemplars from within a mini-batch during training, which ensures consistently increasing difficulty of triplets as the network trains for fast convergence. There are two advantages of applying the vector space model for concept normalization: 1) it is computationally cheap compared with other supervised classification approaches as we only compute the representations for all concepts in ontology once after training the network; 2) it allows concepts and synonyms to be added or deleted after the network is trained, a flexibility that is important for the biomedical domain where frequent updates to ontologies like the Unified Medical Language System (UMLS) Metathesaurus 1 are common. Unlike prior work, our simple and efficient model requires neither negative sampling before the training nor a candidate generator during inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work makes the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a triplet network with online hard triplet mining for training a vector-space model for concept normalization, a simpler and more efficient approach than prior work. \u2022 We propose and explore a variety of strategies for matching mentions to concepts using the vector-space model, with the most successful being a simple sieve-based approach that checks domain-specific synonyms before domain-independent ones. \u2022 Our framework produces models trained on only the ontology -no domain-specific training -that can incorporate domain-specific concept synonyms at search time without retraining, and these models achieve within 3 points of state-of-the-art on five datasets. \u2022 Our framework also allows models to be trained for each domain, achieving state-ofthe-art performance on multiple datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The code for our proposed framework is available at https://github.com/dongfang91/ Triplet-Search-ConNorm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Earlier work on concept normalization focuses on how to use morphological information to conduct lexical look-up and string matching (Kang et al., 2013; D'Souza and Ng, 2015; Leal et al., 2015; Kate, 2016; Lee et al., 2016; Jonnagaddala et al., 2016) . They rely heavily on handcrafted rules and domain knowledge, e.g., D'Souza and Ng (2015) define 10 types of rules at different priority levels to measure morphological similarity between mentions and candidate concepts in the ontologies. The lack of lexical overlap between concept mention and concept in domains like social media, makes rule-based approaches that rely on lexical matching less applicable.",
"cite_spans": [
{
"start": 133,
"end": 152,
"text": "(Kang et al., 2013;",
"ref_id": "BIBREF12"
},
{
"start": 153,
"end": 174,
"text": "D'Souza and Ng, 2015;",
"ref_id": "BIBREF4"
},
{
"start": 175,
"end": 193,
"text": "Leal et al., 2015;",
"ref_id": "BIBREF14"
},
{
"start": 194,
"end": 205,
"text": "Kate, 2016;",
"ref_id": "BIBREF13"
},
{
"start": 206,
"end": 223,
"text": "Lee et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 224,
"end": 250,
"text": "Jonnagaddala et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Supervised approaches for concept normalization have improved with the availability of annotated data and deep learning techniques. When the number of concepts to be predicted is small, classification-based approaches (Limsopatham and Collier, 2016; Lee et al., 2017; Tutubalina et al., 2018; Niu et al., 2019; Li et al., 2019; Miftahutdinov and Tutubalina, 2019) are often adopted, with the size of the classifier's output space equal to the number of concepts. Approaches differ in neural architectures, such as character-level convolution neural networks (CNN) with multi-task learning (Niu et al., 2019) and pre-trained transformer networks (Li et al., 2019; Miftahutdinov and Tutubalina, 2019) . However, classification approaches struggle when the annotated training data does not contain examples of all concepts -common when there are many concepts in the ontology -since the output space of the classifier will not include concepts absent from the training data.",
"cite_spans": [
{
"start": 218,
"end": 249,
"text": "(Limsopatham and Collier, 2016;",
"ref_id": "BIBREF24"
},
{
"start": 250,
"end": 267,
"text": "Lee et al., 2017;",
"ref_id": "BIBREF20"
},
{
"start": 268,
"end": 292,
"text": "Tutubalina et al., 2018;",
"ref_id": "BIBREF40"
},
{
"start": 293,
"end": 310,
"text": "Niu et al., 2019;",
"ref_id": "BIBREF33"
},
{
"start": 311,
"end": 327,
"text": "Li et al., 2019;",
"ref_id": "BIBREF21"
},
{
"start": 328,
"end": 363,
"text": "Miftahutdinov and Tutubalina, 2019)",
"ref_id": "BIBREF28"
},
{
"start": 589,
"end": 607,
"text": "(Niu et al., 2019)",
"ref_id": "BIBREF33"
},
{
"start": 645,
"end": 662,
"text": "(Li et al., 2019;",
"ref_id": "BIBREF21"
},
{
"start": 663,
"end": 698,
"text": "Miftahutdinov and Tutubalina, 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "To alleviate the problems of classification-based approaches, researchers apply learning to rank in concept normalization, a two-step framework including a non-trained candidate generator and a supervised candidate ranker that takes both mention and candidate concept as input. Previous candidate rankers have used point-wise learning to rank (Li et al., 2017) , pair-wise learning to rank (Leaman et al., 2013; Liu and Xu, 2017; Nguyen et al., 2018; Mondal et al., 2019) , and list-wise learning to rank (Murty et al., 2018; Ji et al., 2020; Xu et al., 2020) . These learning to rank approaches also have drawbacks. Firstly, if the candidate generator fails to produce the gold truth concept, the candidate ranker will also fail. Secondly, the training of candidate ranker requires negative sampling beforehand, and it is unclear if these pre-selected negative samples are informative for the whole training process (Hermans et al., 2017; Sung et al., 2020) .",
"cite_spans": [
{
"start": 343,
"end": 360,
"text": "(Li et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 390,
"end": 411,
"text": "(Leaman et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 412,
"end": 429,
"text": "Liu and Xu, 2017;",
"ref_id": "BIBREF25"
},
{
"start": 430,
"end": 450,
"text": "Nguyen et al., 2018;",
"ref_id": "BIBREF32"
},
{
"start": 451,
"end": 471,
"text": "Mondal et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 505,
"end": 525,
"text": "(Murty et al., 2018;",
"ref_id": "BIBREF30"
},
{
"start": 526,
"end": 542,
"text": "Ji et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 543,
"end": 559,
"text": "Xu et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 917,
"end": 939,
"text": "(Hermans et al., 2017;",
"ref_id": "BIBREF8"
},
{
"start": 940,
"end": 958,
"text": "Sung et al., 2020)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Inspired by Schroff et al. (2015) , we propose a triplet network with online hard triplet mining for concept normalization. Our framework sets up concept normalization as a one-step process, calculating similarity between vector representations of the mention and of all concepts in the ontology. Online hard triplet mining allows such a vector space model to generate triplets of (mention, true concept, false concept) within a mini-batch, leading to efficient training and fast convergence (Schroff et al., 2015) . In contrast with previous vector space models where mention and candidate concepts are mapped to vectors via TF-IDF (Leaman et al., 2013) , TreeLSTMs (Liu and Xu, 2017) , CNNs (Nguyen et al., 2018; Mondal et al., 2019) or ELMO (Schumacher et al., 2020) , we generate vector representations with BERT (Devlin et al., 2019) , since it can encode both surface and semantic information (Ma et al., 2019 ).",
"cite_spans": [
{
"start": 12,
"end": 33,
"text": "Schroff et al. (2015)",
"ref_id": "BIBREF36"
},
{
"start": 492,
"end": 514,
"text": "(Schroff et al., 2015)",
"ref_id": "BIBREF36"
},
{
"start": 633,
"end": 654,
"text": "(Leaman et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 667,
"end": 685,
"text": "(Liu and Xu, 2017)",
"ref_id": "BIBREF25"
},
{
"start": 688,
"end": 714,
"text": "CNNs (Nguyen et al., 2018;",
"ref_id": null
},
{
"start": 715,
"end": 735,
"text": "Mondal et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 744,
"end": 769,
"text": "(Schumacher et al., 2020)",
"ref_id": "BIBREF37"
},
{
"start": 817,
"end": 838,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 899,
"end": 915,
"text": "(Ma et al., 2019",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "There are a few similar works to our vector space model, CNN-triplet (Mondal et al., 2019) , BIOSYN (Sung et al., 2020) , RoBERTa-Node2Vec (Pattisapu et al., 2020) , and TTI (Henry et al., 2020) . CNN-triplet is a two-step approach, requiring a generator to generate candidates for training the triplet network, and requiring various embedding resources as input to CNN-based encoder. BIOSYN, RoBERTa-Node2Vec, and TTI are onestep approaches. BIOSYN requires an iterative candidate retrieval over the entire training data during each training step, requires both BERT-based and TF-IDF-based representations, and performs a variety of pre-processing such as acronym expansion. Both RoBERTa-Node2Vec and TTI use a BERTbased encoder to encode the mention texts into a vector space, but they differ in how to generate vector representations for medical concepts. Specifically, RoBERTa-Node2Vec uses a Node2Vec graph embedding approach to generate concept representations, and fixes such representations during training, while TTI randomly initializes vector representations for concepts, and keeps such representations learnable during training. Note that none of these works explore search strategies that allow domainspecific synonyms to be added without retraining the model, while we do.",
"cite_spans": [
{
"start": 69,
"end": 90,
"text": "(Mondal et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 100,
"end": 119,
"text": "(Sung et al., 2020)",
"ref_id": "BIBREF38"
},
{
"start": 139,
"end": 163,
"text": "(Pattisapu et al., 2020)",
"ref_id": "BIBREF34"
},
{
"start": 174,
"end": 194,
"text": "(Henry et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We define a concept mention m as a text string in a corpus D, and a concept c as a unique identifier in an ontology O. The goal of concept normalization is to find a mapping function f that maps each textual mention to its correct concept, i.e., c = f (m). We define concept text t as a text string denoting the concept c, and t \u2208 T (c), where T (c) is all the concept texts denoting concept c. Concept text may come from an ontology, t \u2208 O(c), where O(c) is the synonyms of the concept c from the ontology O, or from an annotated corpus, t \u2208 D(c), where D(c) is the mentions of the concept c in an annotated corpus D. T (c) will allow the generation of tuples (t, c) such as (MI,C0027051) and (Myocardial Infarction,C0027051). Note that, for a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed methods",
"sec_num": "3"
},
{
"text": "tp heart attack BERT encoder V (tp) ti myocardial infarction BERT encoder V (ti) tn cardiovascular infections BERT encoder V (tn) Sip = Sim(V (ti), V (tp)) Sin = Sim(V (ti), V (tn)) L = ln (1 + e (S in \u2212S ip ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed methods",
"sec_num": "3"
},
{
"text": "Figure 1: Example of loss calculation for a single instance of triplet-based training. The same BERT model is used for encoding t i , t p , and t n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed methods",
"sec_num": "3"
},
{
"text": "concept c, it is common to have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed methods",
"sec_num": "3"
},
{
"text": "|O(c)| > |D(c)|, O(c) \u2229 D(c) = \u2205, or even D(c) = \u2205, i.e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed methods",
"sec_num": "3"
},
{
"text": ", it is common for there to be more concept synonyms in the ontology than the annotated corpus, it is common for the ontology and annotated corpus to provide different concept synonyms, and it is common that annotated corpus only covers a small subset of all concepts in an ontology. We implement f as a vector space model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed methods",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (m) = argmax c\u2208O t\u2208T (c) Sim(V (m), V (t))",
"eq_num": "(1)"
}
],
"section": "Proposed methods",
"sec_num": "3"
},
{
"text": "where V (x) is a vector representation of text x and Sim is a similarity measure such as cosine similarity, inner product, or euclidean distance. We learn the vector representations V (x) using a triplet network architecture (Hoffer and Ailon, 2015) , which learns from triplets of (anchor text t i , positive text t p , negative text t n ) where t i and t p are texts for the same concept, and t n is a text for a different concept. The triplet network attempts to learn V such that for all training triplets:",
"cite_spans": [
{
"start": 225,
"end": 249,
"text": "(Hoffer and Ailon, 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed methods",
"sec_num": "3"
},
{
"text": "Sim(V (t i ), V (t ip )) > Sim(V (t i ), V (t in )) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed methods",
"sec_num": "3"
},
{
"text": "The triplet network architecture has been adopted in learning representations for images (Schroff et al., 2015; Gordo et al., 2016 ) and text (Neculoiu et al., 2016; Reimers and Gurevych, 2019) . It consists of three instances of the same sub-network (with shared parameters). When fed a (t i , t ip , t in ) triplet of texts, the sub-network outputs vector representations for each text, which are then fed into a triplet loss. We adopt PubMed-BERT (Gu et al., 2020) as the sub-network, where the representation for the concept text is an average pooling of the representations for all sub-word tokens 2 . This architecture is shown in Figure 1 . The inputs to our model are only the mentions or synonyms. We leave the resolution of ambiguous mentions, which will require exploration of contextual information, for future work.",
"cite_spans": [
{
"start": 89,
"end": 111,
"text": "(Schroff et al., 2015;",
"ref_id": "BIBREF36"
},
{
"start": 112,
"end": 130,
"text": "Gordo et al., 2016",
"ref_id": "BIBREF5"
},
{
"start": 142,
"end": 165,
"text": "(Neculoiu et al., 2016;",
"ref_id": "BIBREF31"
},
{
"start": 166,
"end": 193,
"text": "Reimers and Gurevych, 2019)",
"ref_id": "BIBREF35"
},
{
"start": 450,
"end": 467,
"text": "(Gu et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 637,
"end": 645,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proposed methods",
"sec_num": "3"
},
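As a concrete illustration of the shared sub-network, the sketch below builds a PubMedBERT encoder with mean pooling over sub-word tokens using the sentence-transformers library cited in Section 4.2. The Hugging Face checkpoint name and variable names are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch (assumed setup): a PubMedBERT encoder with average pooling over
# sub-word tokens, built with the sentence-transformers library. The Hugging Face
# checkpoint name below is an assumption about which PubMed-BERT model is meant.
from sentence_transformers import SentenceTransformer, models

word_embedding_model = models.Transformer(
    "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext",
    max_seq_length=8,  # mentions and synonyms are short (sequence_length = 8, Section 4.2)
)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,   # average pooling over all sub-word tokens
    pooling_mode_cls_token=False,
    pooling_mode_max_tokens=False,
)
encoder = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# The same encoder (shared parameters) embeds anchor, positive, and negative texts.
vectors = encoder.encode(["heart attack", "myocardial infarction", "cardiovascular infections"])
```

Because the three branches of the triplet network share one set of weights, a single encoder object suffices; the triplet structure exists only in how the loss pairs up the resulting vectors.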
{
"text": "An essential part of learning using triplet loss is how to generate triplets. As the number of synonyms gets larger, the number of possible triplets grows cubically, making training impractical. We follow the idea of online triplet mining (Schroff et al., 2015) which considers only triplets within a mini-batch. We first feed a mini-batch of b concept texts to the PubMed-BERT encoder to generate a d-dimensional representation for each concept text, resulting in a matrix M \u2208 R b\u00d7d . We then compute the pairwise similarity matrix:",
"cite_spans": [
{
"start": 239,
"end": 261,
"text": "(Schroff et al., 2015)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online hard triplet mining",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S = Sim(M, M T )",
"eq_num": "(3)"
}
],
"section": "Online hard triplet mining",
"sec_num": "3.1"
},
{
"text": "where each entry S ij corresponds to the similarity score between the i th and j th concept texts in the mini-batch. As the easy triplets would not contribute to the training and result in slower convergence (Schroff et al., 2015) , for each concept text t i , we only select a hard positive t p and a hard negative t n from the mini-batch such that:",
"cite_spans": [
{
"start": 208,
"end": 230,
"text": "(Schroff et al., 2015)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online hard triplet mining",
"sec_num": "3.1"
},
{
"text": "p = argmin j\u2208[1,b]:j =i\u2227C(j)=C(i) S ij (4) n = argmax k\u2208[1,b]:k =i\u2227C(k) =C(i) S ik (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online hard triplet mining",
"sec_num": "3.1"
},
{
"text": "where C(x) is the ontology concept from which t x was taken, i.e., if t x \u2208 T (c) then C(x) = c. We train the triplet network using batch hard soft margin loss (Hermans et al., 2017) :",
"cite_spans": [
{
"start": 160,
"end": 182,
"text": "(Hermans et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online hard triplet mining",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(i) = ln (1 + e (S in \u2212S ip ))",
"eq_num": "(6)"
}
],
"section": "Online hard triplet mining",
"sec_num": "3.1"
},
{
"text": "where S, n, and p are as in eqs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online hard triplet mining",
"sec_num": "3.1"
},
{
"text": "(3) to (5), and the hinge function, max(\u2022, 0), in the traditional triplet loss is replaced by a softplus function, ln(1 + e (\u2022) ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online hard triplet mining",
"sec_num": "3.1"
},
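To make Eqs. (3) to (6) concrete, here is a minimal PyTorch sketch of batch-hard mining with the soft-margin loss. The function and variable names are illustrative, and similarity is taken as negative Euclidean distance, matching the training setup described in Section 4.2.

```python
# Minimal sketch of online hard triplet mining with the batch hard soft-margin
# loss (Eqs. 3-6). `embeddings` is the b x d matrix M of encoded concept texts in
# a mini-batch; `labels[i]` is the concept C(i) the i-th text was taken from.
import torch

def batch_hard_soft_margin_loss(embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Pairwise similarity S (Eq. 3); here negative Euclidean distance, so larger = more similar.
    sim = -torch.cdist(embeddings, embeddings, p=2)

    same_concept = labels.unsqueeze(0) == labels.unsqueeze(1)                 # C(i) == C(j)
    not_self = ~torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    # Hard positive (Eq. 4): the least similar text of the same concept.
    # Batches are built so every anchor has at least one positive (Section 4.2).
    pos_sim = sim.masked_fill(~(same_concept & not_self), float("inf")).min(dim=1).values
    # Hard negative (Eq. 5): the most similar text of a different concept.
    neg_sim = sim.masked_fill(same_concept, float("-inf")).max(dim=1).values

    # Soft-margin loss (Eq. 6): ln(1 + exp(S_in - S_ip)), i.e. a softplus.
    return torch.nn.functional.softplus(neg_sim - pos_sim).mean()
```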
{
"text": "Once our vector space model has been trained, we consider several options for how to find the most similar concept c to a text mention m. First, we 2 We also experimented with using the output of the CLStoken, and max-pooling of the output representations for the sub-word tokens as proposed by (Reimers and Gurevych, 2019) , but neither resulted in better performance.",
"cite_spans": [
{
"start": 148,
"end": 149,
"text": "2",
"ref_id": null
},
{
"start": 295,
"end": 323,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity search",
"sec_num": "3.2"
},
{
"text": "Representation Type must choose a search target: we can search over the concepts from the ontology, or the training data, or both. Second we must choose a representation type: we can compare m directly to each text (ontology synonym or training data mention) of each concept, or we can calculate a vector representation of each concept and then compare m directly to the concept vector. Table 1 summarizes these options.",
"cite_spans": [],
"ref_spans": [
{
"start": 387,
"end": 394,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Searching Over",
"sec_num": null
},
{
"text": "Ontology Training Data Text Concept O-T O-C D-T D-C OD-T OD-C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Searching Over",
"sec_num": null
},
{
"text": "We consider the following search targets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Searching Over",
"sec_num": null
},
{
"text": "Data We search over the concepts in the annotated data. These mentions will be more domainspecific (e.g., PT may refer to patient in clinical notes, but to physical therapy in scientific articles), but may be more predictive if the evaluation data is from the same domains. We search over the train subset of the data for dev evaluation, and train + dev subset for test evaluation. Ontology We search over the concepts in the ontology. The synonyms will be more domainindependent, and the ontology will cover concepts never seen in the annotated training data. Data and ontology We search over the concepts in both the training data and the ontology. For concepts in the annotated training data, their representations are averaged over mentions in the training data and synonyms in the ontology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Searching Over",
"sec_num": null
},
{
"text": "We consider the following representation types:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Searching Over",
"sec_num": null
},
{
"text": "Text We represent each text (ontology synonym or training data mention) as a vector by running it through our triplet-fine-tuned PubMed-BERT encoder. Concept normalization then compares the mention vector to each text vector:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Searching Over",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (m) = argmax c\u2208O t\u2208T (c) Sim(V (m), V (t))",
"eq_num": "(7)"
}
],
"section": "Searching Over",
"sec_num": null
},
{
"text": "When a retrieved text t is present in more than one concept (e.g., no appetite appears in concepts C0426579, C0003123, C1971624), and thus we see the same Sim for multiple concepts, we pick a concept randomly to break ties. Concept We represent each concept as a vector by taking an average over the triplet-fine-tuned PubMed-BERT representations of that concept's texts (ontology synonyms and/or training data mentions). Concept normalization then compares the mention vector to each concept vector:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Searching Over",
"sec_num": null
},
{
"text": "First component Second component D-T O-T D-T O-C D-C O-T D-C O-C D-T OD-T D-T OD-C D-C OD-T D-C OD-C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Searching Over",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (m) = argmax c\u2208O Sim V (m), mean t\u2208T (c) V (t)",
"eq_num": "(8)"
}
],
"section": "Searching Over",
"sec_num": null
},
{
"text": "The averages here mean that different concepts with some (but not all) overlapping synonyms (e.g., C0426579, C0003123, C1971624 in UMLS all have the synonym no appetite) will end up with different vector representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Searching Over",
"sec_num": null
},
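The two representation types can be contrasted with a short sketch over precomputed vectors: Eq. (7) keeps one vector per text, while Eq. (8) averages a concept's texts into a single vector. All names below are illustrative assumptions, not the released implementation.

```python
# Minimal sketch of the two representation types. `text_vectors[c]` is an
# (n_texts x d) NumPy array holding the encoded texts T(c) of concept c.
import numpy as np

def _unit(x):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-12)

def search_text_level(mention_vector, text_vectors):
    """Eq. (7): compare the mention to every text vector; keep the concept of the best text."""
    v_m = _unit(mention_vector)
    best_concept, best_score = None, float("-inf")
    for concept, vecs in text_vectors.items():
        score = float((_unit(vecs) @ v_m).max())       # best-matching synonym/mention of c
        if score > best_score:
            best_concept, best_score = concept, score
    return best_concept

def search_concept_level(mention_vector, text_vectors):
    """Eq. (8): compare the mention to each concept's averaged text vector."""
    v_m = _unit(mention_vector)
    best_concept, best_score = None, float("-inf")
    for concept, vecs in text_vectors.items():
        score = float(_unit(vecs.mean(axis=0)) @ v_m)  # one averaged vector per concept
        if score > best_score:
            best_concept, best_score = concept, score
    return best_concept
```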
{
"text": "Traditional sieve-based approaches for concept normalization (D'Souza and Ng, 2015; Jonnagaddala et al., 2016; Luo et al., 2019; Henry et al., 2020) achieved competitive performance by ordering a sequence of searches over dictionaries from most precise to least precise. Inspired by this work, we consider a sieve-based similarity search that: 1) searches over the annotated training data, then 2) searches over the ontology (possibly combined with the annotated training data). Table 2 lists all possible combinations of first and second components in sieve-based search. For instance, in sieve-based search D-T + O-C, we first search over the annotated corpus using trainingdata-mention vectors (D-T), and then search over the ontology using concept vectors (O-C).",
"cite_spans": [
{
"start": 61,
"end": 83,
"text": "(D'Souza and Ng, 2015;",
"ref_id": "BIBREF4"
},
{
"start": 84,
"end": 110,
"text": "Jonnagaddala et al., 2016;",
"ref_id": "BIBREF11"
},
{
"start": 111,
"end": 128,
"text": "Luo et al., 2019;",
"ref_id": "BIBREF26"
},
{
"start": 129,
"end": 148,
"text": "Henry et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 479,
"end": 486,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Sieve-based search",
"sec_num": "3.2.1"
},
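A minimal sketch of one sieve combination (D-T + O-C) is shown below, using the 0.95 cosine-similarity threshold reported in Section 4.2; the index structures and function names are assumptions for illustration.

```python
# Minimal sketch of a sieve-based search (here D-T + O-C). Each index maps a
# concept id to an (n_texts x d) array of its encoded texts; the threshold is the
# 0.95 cosine-similarity cut-off from Section 4.2.
import numpy as np

def _best_match(mention_vector, index, average_texts):
    """Return (concept, cosine score) of the best match in `index`."""
    v_m = mention_vector / (np.linalg.norm(mention_vector) + 1e-12)
    best_concept, best_score = None, float("-inf")
    for concept, vecs in index.items():
        mat = vecs.mean(axis=0, keepdims=True) if average_texts else vecs
        mat = mat / (np.linalg.norm(mat, axis=1, keepdims=True) + 1e-12)
        score = float((mat @ v_m).max())
        if score > best_score:
            best_concept, best_score = concept, score
    return best_concept, best_score

def sieve_search(mention_vector, train_index, ontology_index, threshold=0.95):
    # First sieve: domain-specific training-data mentions at the text level (D-T).
    concept, score = _best_match(mention_vector, train_index, average_texts=False)
    if score >= threshold:
        return concept
    # Second sieve: ontology concept vectors (O-C); other rows of Table 2 swap
    # which index and which representation type each sieve uses.
    concept, _ = _best_match(mention_vector, ontology_index, average_texts=True)
    return concept
```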
{
"text": "We conduct experiments on three scientific article datasets -NCBI (Dogan et al., 2014), BC5CDR-D and BC5CDR-C (Li et al., 2016) -and two clinical note datasets -MCN (Luo et al., 2019) and ShARe/CLEF (Suominen et al., 2013) . The statistics of each dataset are described in table 3.",
"cite_spans": [
{
"start": 165,
"end": 183,
"text": "(Luo et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 199,
"end": 222,
"text": "(Suominen et al., 2013)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "NCBI The NCBI disease corpus 3 contains 17,324 manually annotated disorder mentions from 792 PubMed abstracts. The disorder mentions are mapped to 750 MEDIC lexicon (Davis et al., 2012) concepts. We split the released training set into use 5,134 training mentions and 787 development mentions, and keep the 960 mentions from the original test set as evaluation. We use the 2012 version of MEDIC ontology which contains 11,915 concepts and 71,923 synonyms. We take 40 clinical notes from the released data as training, consisting of 5,334 mentions, and the standard evaluation data with 6,925 mentions as our test set. Around 2.7% of mentions in MCN are assigned the CUI-less label.",
"cite_spans": [
{
"start": 165,
"end": 185,
"text": "(Davis et al., 2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Unless specifically noted otherwise, we use the same training procedure and hyper-parameter settings across all experiments and on all datasets. As the triplet mining requires at least one positive text in a batch for each anchor text, we randomly sample one positive text for each anchor text and group them into batches. Like previous work (Schroff et al., 2015; Hermans et al., 2017) , we adopt euclidean distance to calculate similarity score during training, while at inference time, we compute cosine similarity as it is simpler to interpret. For the sieve-based search, if the cosine similarity score between the mention and the prediction of the first sieve is above 0.95, we use the prediction of first sieve, otherwise, we use the prediction of the second sieve. When training the triplet network on the combination of the ontology and annotated corpus, we take all the synonyms from the ontology and repeat the concept texts in the annotated corpus such that",
"cite_spans": [
{
"start": 342,
"end": 364,
"text": "(Schroff et al., 2015;",
"ref_id": "BIBREF36"
},
{
"start": 365,
"end": 386,
"text": "Hermans et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "4.2"
},
{
"text": "|D| |O| = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "4.2"
},
{
"text": "3 . In preliminary experiments we found that large ontologies overwhelmed small annotated corpora. We also experimented with three ratios 1 3 , 2 3 , and 1 between concept texts and synonyms of ontology on NCBI and BC5CDR-D datasets, and found that the ratio of 1 3 achieves the best performance for Train:OD models. We then kept the same ratio setting for all datasets. We did not thoroughly explore other ratios and leave that to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "4.2"
},
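A small sketch of this data mix, under the stated |D| / |O| = 1/3 target, might look as follows; the pairing of texts with concept ids and the helper name are illustrative assumptions.

```python
# Minimal sketch (assumed data layout): keep every ontology synonym once and
# repeat the annotated-corpus concept texts until |D| / |O| reaches the target
# ratio of 1/3. Each item is a (text, concept_id) pair.
import math

def mix_training_texts(ontology_pairs, corpus_pairs, ratio=1 / 3):
    if not corpus_pairs:
        return list(ontology_pairs)
    target = math.ceil(ratio * len(ontology_pairs))                # desired |D| after repetition
    repeats = max(1, math.ceil(target / len(corpus_pairs)))
    repeated_corpus = (list(corpus_pairs) * repeats)[:target]      # repeat, then trim to the target
    return list(ontology_pairs) + repeated_corpus
```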
{
"text": "For all experiments, we use PubMed-BERT (Gu et al., 2020) as the starting point, which pre-trains a BERT-style model from scratch on PubMed abstracts and full texts. In our preliminary experi-ments, we also tried BioBERT as the text encoder, but that resulted in worse performance across five datasets. We use the pytorch implementation of sentence-transformers 7 to train the Triplet Network for concept normalization. We use the following hyper-parameters during the training of the triplet network: sequence_length = 8, batch_size = 1500, epoch_size = 100, optimizer = Adam, learning_rate = 3e-5, warmup_steps = 0.",
"cite_spans": [
{
"start": 40,
"end": 57,
"text": "(Gu et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "4.2"
},
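Putting these settings together, a minimal training sketch with the sentence-transformers library could look like the following. Here `encoder` is the PubMedBERT + mean-pooling model sketched earlier and `training_items` pairs each concept text with an integer concept label; both names are assumptions rather than the released code.

```python
# Minimal training sketch with sentence-transformers and the hyper-parameters
# listed above. Each InputExample carries one concept text and an integer label
# identifying its concept; examples are pre-grouped so every anchor has a positive
# in its batch (the paper samples one positive per anchor).
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, losses

train_examples = [InputExample(texts=[text], label=concept_label)
                  for text, concept_label in training_items]
# shuffle=False keeps the pre-grouped anchor/positive pairs in the same mini-batch.
train_dataloader = DataLoader(train_examples, batch_size=1500, shuffle=False)
# Batch-hard soft-margin triplet loss; its default distance is Euclidean, matching training here.
train_loss = losses.BatchHardSoftMarginTripletLoss(model=encoder)

encoder.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=100,
    warmup_steps=0,
    optimizer_params={"lr": 3e-5},
)
```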
{
"text": "The standard evaluation metric for concept normalization is accuracy, because the most similar concept in prediction is of primary interest. For composite mentions like breast and ovarian cancer that are mapped to more than one concept in NCBI, BC5CDR-D, and BC5CDR-C datasets, we adopt the evaluation strategy that composite entity is correct if every prediction for each separate mention is correct (Sung et al., 2020) .",
"cite_spans": [
{
"start": 401,
"end": 420,
"text": "(Sung et al., 2020)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "4.3"
},
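For clarity, the accuracy computation under this strategy can be sketched as below, where gold and predicted are parallel lists whose items are lists of concept ids (singletons for ordinary mentions); these names are illustrative.

```python
# Minimal sketch of accuracy with composite mentions: a mention is counted as
# correct only if every one of its component concept predictions is correct.
def accuracy(gold, predicted):
    assert len(gold) == len(predicted)
    correct = sum(
        1 for gold_concepts, pred_concepts in zip(gold, predicted)
        if list(gold_concepts) == list(pred_concepts)
    )
    return correct / len(gold) if gold else 0.0
```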
{
"text": "We use the development data to choose whether to train the triplet network on just the ontology or also the training data, and to choose which among the similarity search strategies described in section 3.2. Table 4 shows the performance of all such systems across the five different corpora. The top half of the table focuses on settings where the triplet network only needs to be trained once, on the ontology, and the bottom half focuses on settings where the triplet network is retrained for each new dataset. For each half of the table, the last column gives the average of the ranks of each setting's performance across the five corpora. For example, when training the triplet network only on the ontology, the searching strategy D-C (search the training data using concept vectors) is almost always the worst performing, Table 4 : Dev performances of the triplet network trained on ontology and ontology + data with different similarity search strategies. The last column Avg. Rank shows the average rank of each similarity search strategy across multiple datasets. Models with best average rank are highlighted in grey; models with best accuracy are bolded.",
"cite_spans": [],
"ref_spans": [
{
"start": 208,
"end": 215,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 828,
"end": 835,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Model selection",
"sec_num": "5"
},
{
"text": "ranking 14th of 14 in four corpora and 12th of 14 in one corpus, for an average rank of 13.6. Table 4 shows that the best models search over both the ontology and the training data. Models that only search over the training data (D-T and D-C) perform worst, with average ranks of 12.6 or higher regardless of what the triplet network is trained on, most likely because the training data covers only a fraction of the concepts in the test data. Models that only search over the ontology (O-T and O-C) are only slightly better, with average ranks between 9.6 and 12, though the models in the first two rows of the table at least have the advantage that they require no annotated training data (they train on and search over only the ontology). However, the performance of such models can be improved by adding domain-specific synonyms to the ontology, i.e., OD-T vs. O-T (rows 5 vs. 1), and OD-C vs. O-C (rows 6 vs. 2), or adding domain-specific synonyms and then searching in a sieve-based manner (rows 7-14) . Table 4 also shows that searching based on text (ontology synonyms or training data mentions) vectors typically outperforms searching based on con-cept (average of text) vectors. Each pair of rows in the table shows such a comparison, and only in rows 15-16 and 19-20 are the average ranks of the -C models higher than the -T models. Table 4 also shows that models using mixed representation types (-T and -C) have worse ranks than the text-only models (-T). For instance, going from Train:O-Search:O-C to Train:O-Search:O-T improves the average rank from 12 to 10.2, going from Train:OD-Search:D-T+OD-C to Train:OD-Search:D-T+OD-T improves the average rank from 5.2 to 2.4, etc. There are a few exceptions to this on the MCN dataset. We analyzed the differences in the predictions of Train:OD-Search:D-T+OD-T (row 25) and Train:OD-Search:D-T+OD-C (row 26) on this dataset, and found that concept vectors sometimes helps to solve ambiguous mentions by averaging their concept texts. For instance, the OD-T model finds concepts C0013144 and C2830004 for mention somnolent as they have the overlapping synonym somnolent, while the OD-C model ranks C2830004 higher as the other concept also has other synonyms such as Drowsy, Sleepiness.",
"cite_spans": [
{
"start": 996,
"end": 1007,
"text": "(rows 7-14)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 94,
"end": 101,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 1010,
"end": 1017,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 1344,
"end": 1351,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Model selection",
"sec_num": "5"
},
{
"text": "Finally, 88.80 88.9 94.1 --CNN-based ranking (Li et al., 2017) 86.10 --90.30 -BERT-based ranking (Ji et al., 2020) 89.06 --91.10 -BERT-based ranking (Xu et al., 2020) ----83.56 BIOSYN (Sung et al., 2020) 91 outperform their non-sieve-based counterparts. For example, D-T + O-T has better average ranks than O-T, D-T, or 3, and 5; and rows 21 vs. 15, 17, and 19) . From this analysis on the dev set, we select the following models to evaluate on the test set: Train:O + Search:O-T This is the best approach that requires only the ontology; no annotated training data is used. Train:O + Search:D-T+OD-T This is the best approach that only needs to be trained once (on the ontology), as the training data is only used to add extra concept text during search time. This is similar to a real-world scenario where a user manually adds some extra domain-specific synonyms for concepts they care about. Train:OD + Search:D-T+OD-T This is the best approach that can be created from any combination of ontology and training data. The triplet network must be retrained for each new domain. Train:OD + Search:tuned This is the bold models in the second half of table 4. It requires not only retraining the triplet network for each new domain, but also trying out all search strategies on the new domain and selecting the best one. (p=0) . Note that the performance of TTI is from an ensemble of multiple system runs. Yet this model is simpler than most prior work: it requires no two-step generate-and-rank framework (Li et al., 2017; Ji et al., 2020; Xu et al., 2020) , no iterative candidate retrieval over the entire training data (Sung et al., 2020) , no hand-crafted rules or features (D'Souza and Ng, 2015; Luo et al., 2019) , and no acronym expansion or TF-IDF transformations (D'Souza and Ng, 2015; Ji et al., 2020; Sung et al., 2020) . The PubMed-BERT rows in Table 5 demonstrate that the triplet training is a critical part of the success: if we use PubMed-BERT without triplet training, performance is 2 to 8 points worse than our best models, depending on the dataset. Yet, we can see that our proposed search strategies are also important, as on the BC5CDR datasets, PubMed-BERT can get within 3 points of the state-of-the-art using the D-T+OD-T search strategy (though it is much further away on the other datasets).",
"cite_spans": [
{
"start": 45,
"end": 62,
"text": "(Li et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 97,
"end": 114,
"text": "(Ji et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 149,
"end": 166,
"text": "(Xu et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 184,
"end": 203,
"text": "(Sung et al., 2020)",
"ref_id": "BIBREF38"
},
{
"start": 320,
"end": 322,
"text": "3,",
"ref_id": null
},
{
"start": 323,
"end": 329,
"text": "and 5;",
"ref_id": null
},
{
"start": 330,
"end": 349,
"text": "and rows 21 vs. 15,",
"ref_id": null
},
{
"start": 350,
"end": 353,
"text": "17,",
"ref_id": null
},
{
"start": 354,
"end": 361,
"text": "and 19)",
"ref_id": null
},
{
"start": 1319,
"end": 1324,
"text": "(p=0)",
"ref_id": null
},
{
"start": 1505,
"end": 1522,
"text": "(Li et al., 2017;",
"ref_id": "BIBREF22"
},
{
"start": 1523,
"end": 1539,
"text": "Ji et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 1540,
"end": 1556,
"text": "Xu et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 1622,
"end": 1641,
"text": "(Sung et al., 2020)",
"ref_id": "BIBREF38"
},
{
"start": 1678,
"end": 1700,
"text": "(D'Souza and Ng, 2015;",
"ref_id": "BIBREF4"
},
{
"start": 1701,
"end": 1718,
"text": "Luo et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 1772,
"end": 1794,
"text": "(D'Souza and Ng, 2015;",
"ref_id": "BIBREF4"
},
{
"start": 1795,
"end": 1811,
"text": "Ji et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 1812,
"end": 1830,
"text": "Sung et al., 2020)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 1857,
"end": 1864,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Model selection",
"sec_num": "5"
},
{
"text": "Perhaps most interestingly, our triplet network trained only on the ontology and no annotated training data, Train:O+Search:D-T+OD-T, achieves within 3 points of state-of-the-art on all datasets. We believe this represents a more realistic scenario: unlike prior work, our triplet network does not need to be retrained for each new dataset/domain if their concepts are from the same ontology. Instead, the model can be adapted to a new dataset/domain by simply pointing out any extra domain-specific synonyms for concepts, and the search can integrate these directly. Domain-specific synonyms do Table 6 : Top similar texts, their concepts, and similarity scores for mention primary HPT (D049950) predicted from models PubMed-BERT + Search:OD-T, Train:O + Search:OD-T and Train:OD + Search:OD-T.",
"cite_spans": [],
"ref_spans": [
{
"start": 596,
"end": 603,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "seem to be necessary for all datasets; without them (i.e., Train:O+Search:O-T), performance is about 10 points below state-of-the-art. As a small qualitative analysis of the models, Table 6 shows an example of similarity search results, where the systems have been asked to normalize the mention primary HPT. PubMed-BERT fails, producing unrelated acronyms, while both triplet network models find the concept and rank it with the highest similarity score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Our ability to normalize polysemous concept mentions is limited by their context-independent representations. Although our PubMed-BERT encoder is a pre-trained contextual model, we feed in only the mention text, not any context, when producing a representation vector. This is not ideal for mentions with multiple meanings, e.g., potassium in clinical notes may refer to the substance (C0032821) or the measurement (C0202194), and only the context will reveal which one. A better strategy to generate the contextualized representation for the concept mention, e.g., Schumacher et al. (2020) , may yield improvements for such mentions.",
"cite_spans": [
{
"start": 566,
"end": 590,
"text": "Schumacher et al. (2020)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and future research",
"sec_num": "7"
},
{
"text": "We currently train a separate triplet network for each ontology (one for MEDIC, one for CTD, one for SNOMED-CT, etc.) but in the future we would like to train on a comprehensive ontology like the UMLS Metathesaurus (Bodenreider, 2004) , which includes nearly 200 different vocabularies (SNOMED-CT, MedDRA, RxNorm, etc.), and more than 3.5 million concepts. We expect such a general vector space model would be more broadly useful to the biomedical NLP community.",
"cite_spans": [
{
"start": 215,
"end": 234,
"text": "(Bodenreider, 2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and future research",
"sec_num": "7"
},
{
"text": "We explored one type of triplet training network, but in the future we would like to explore other variants, such as semi-hard triplet mining (Schroff et al., 2015) for generating samples, cosine similarity for measuring the similarity during training and inference, and multi-similarity loss for calculating the loss.",
"cite_spans": [
{
"start": 142,
"end": 164,
"text": "(Schroff et al., 2015)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and future research",
"sec_num": "7"
},
{
"text": "We presented a vector-space framework for concept normalization, based on pre-trained transformers, a triplet objective with online hard triplet mining, and a new approach to vector similarity search. Across five datasets, our models that require only an ontology to train are competitive with state-of-the-art models that require domain-specific training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "https://www.nlm.nih.gov/research/ umls/knowledge_sources/metathesaurus/ index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.ncbi.nlm.nih.gov/ CBBresearch/Dogan/DISEASE/ 4 https://biocreative.bioinformatics. udel.edu/tasks/biocreative-v/ 5 https://sites.google.com/site/ shareclefehealth/data 6 https://n2c2.dbmi.hms.harvard.edu/ track3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/UKPLab/ sentence-transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used a one-sample bootstrap resampling test. The one sample is 10,000 runs of bootstrapping results of our system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Research reported in this publication was supported by the National Library of Medicine and the National Institute of General Medical Sciences of the National Institutes of Health under Award Numbers R01LM012918 and R01GM114355. The computations were done in systems supported by the National Science Foundation under Grant No. 1228509. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Unified Medical Language System (UMLS): integrating biomedical terminology",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Bodenreider",
"suffix": ""
}
],
"year": 2004,
"venue": "Nucleic Acids Research",
"volume": "32",
"issue": "suppl_1",
"pages": "267--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Bodenreider. 2004. The Unified Med- ical Language System (UMLS): integrating biomedical terminology. Nucleic Acids Research, 32(suppl_1):D267-D270.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Medic: a practical disease vocabulary used at the comparative toxicogenomics database",
"authors": [
{
"first": "Allan",
"middle": [
"Peter"
],
"last": "Davis",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wiegers",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"J"
],
"last": "Michael C Rosenstein",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mattingly",
"suffix": ""
}
],
"year": 2012,
"venue": "Database",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allan Peter Davis, Thomas C Wiegers, Michael C Rosenstein, and Carolyn J Mattingly. 2012. Medic: a practical disease vocabulary used at the compara- tive toxicogenomics database. Database, 2012.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Ncbi disease corpus: a resource for disease name recognition and concept normalization",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Rezarta Islamaj Dogan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of biomedical informatics",
"volume": "47",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. Ncbi disease corpus: a resource for dis- ease name recognition and concept normalization. Journal of biomedical informatics, 47:1-10.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Sieve-based entity linking for the biomedical domain",
"authors": [
{
"first": "D'",
"middle": [],
"last": "Jennifer",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Souza",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "297--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer D'Souza and Vincent Ng. 2015. Sieve-based entity linking for the biomedical domain. In Pro- ceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 297- 302, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Deep image retrieval: Learning global representations for image search",
"authors": [
{
"first": "Albert",
"middle": [],
"last": "Gordo",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Almaz\u00e1n",
"suffix": ""
},
{
"first": "Jerome",
"middle": [],
"last": "Revaud",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Larlus",
"suffix": ""
}
],
"year": 2016,
"venue": "European conference on computer vision",
"volume": "",
"issue": "",
"pages": "241--257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Albert Gordo, Jon Almaz\u00e1n, Jerome Revaud, and Di- ane Larlus. 2016. Deep image retrieval: Learning global representations for image search. In Euro- pean conference on computer vision, pages 241-257. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Jianfeng Gao, and Hoifung Poon. 2020. Domainspecific language model pretraining for biomedical natural language processing",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Tinn",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Lucas",
"suffix": ""
},
{
"first": "Naoto",
"middle": [],
"last": "Usuyama",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.15779"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. Domain- specific language model pretraining for biomedi- cal natural language processing. arXiv preprint arXiv:2007.15779.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The 2019 National Natural language processing (NLP) Clinical Challenges (n2c2)/Open Health NLP (OHNLP) shared task on clinical concept normalization for clinical records",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Henry",
"suffix": ""
},
{
"first": "Yanshan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Feichen",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of the American Medical Informatics Association",
"volume": "27",
"issue": "10",
"pages": "1529--1537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sam Henry, Yanshan Wang, Feichen Shen, and Ozlem Uzuner. 2020. The 2019 National Natural language processing (NLP) Clinical Challenges (n2c2)/Open Health NLP (OHNLP) shared task on clinical con- cept normalization for clinical records. Journal of the American Medical Informatics Association, 27(10):1529-1537.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "defense of the triplet loss for person reidentification",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Hermans",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Beyer",
"suffix": ""
},
{
"first": "Bastian",
"middle": [],
"last": "Leibe",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.07737"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Hermans, Lucas Beyer, and Bastian Leibe. 2017. In defense of the triplet loss for person re- identification. arXiv preprint arXiv:1703.07737.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep metric learning using triplet network",
"authors": [
{
"first": "Elad",
"middle": [],
"last": "Hoffer",
"suffix": ""
},
{
"first": "Nir",
"middle": [],
"last": "Ailon",
"suffix": ""
}
],
"year": 2015,
"venue": "International Workshop on Similarity-Based Pattern Recognition",
"volume": "",
"issue": "",
"pages": "84--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elad Hoffer and Nir Ailon. 2015. Deep metric learning using triplet network. In International Workshop on Similarity-Based Pattern Recognition, pages 84-92. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bertbased ranking for biomedical entity normalization",
"authors": [
{
"first": "Zongcheng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2020,
"venue": "AMIA Summits on Translational Science Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zongcheng Ji, Qiang Wei, and Hua Xu. 2020. Bert- based ranking for biomedical entity normalization. AMIA Summits on Translational Science Proceed- ings, 2020:269.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion",
"authors": [
{
"first": "Jitendra",
"middle": [],
"last": "Jonnagaddala",
"suffix": ""
},
{
"first": "Toni",
"middle": [
"Rose"
],
"last": "Jue",
"suffix": ""
},
{
"first": "Nai-Wen",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Hong-Jie",
"middle": [],
"last": "Dai",
"suffix": ""
}
],
"year": 2016,
"venue": "Database",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jitendra Jonnagaddala, Toni Rose Jue, Nai-Wen Chang, and Hong-Jie Dai. 2016. Improving the dictionary lookup approach for disease normalization using en- hanced dictionary and query expansion. Database, 2016:baw112.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Using rule-based natural language processing to improve disease normalization in biomedical text",
"authors": [
{
"first": "Ning",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Bharat",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Zubair",
"middle": [],
"last": "Afzal",
"suffix": ""
},
{
"first": "Erik",
"middle": [
"M"
],
"last": "Van Mulligen",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"A"
],
"last": "Kors",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of the American Medical Informatics Association",
"volume": "20",
"issue": "5",
"pages": "876--881",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ning Kang, Bharat Singh, Zubair Afzal, Erik M van Mulligen, and Jan A Kors. 2013. Using rule-based natural language processing to improve disease nor- malization in biomedical text. Journal of the Amer- ican Medical Informatics Association, 20(5):876- 881.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Normalizing clinical terms using learned edit distance patterns",
"authors": [
{
"first": "Rohit",
"middle": [
"J."
],
"last": "Kate",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of the American Medical Informatics Association",
"volume": "23",
"issue": "2",
"pages": "380--386",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohit J. Kate. 2016. Normalizing clinical terms using learned edit distance patterns. Journal of the Amer- ican Medical Informatics Association, 23(2):380- 386.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "ULisboa: Recognition and normalization of medical concepts",
"authors": [
{
"first": "Andr\u00e9",
"middle": [],
"last": "Leal",
"suffix": ""
},
{
"first": "Bruno",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Couto",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "406--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andr\u00e9 Leal, Bruno Martins, and Francisco Couto. 2015. ULisboa: Recognition and normalization of medical concepts. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 406-411, Denver, Colorado. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "DNorm: disease name normalization with pairwise learning to rank",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Rezarta",
"middle": [
"Islamaj"
],
"last": "Dogan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2013,
"venue": "Bioinformatics",
"volume": "29",
"issue": "22",
"pages": "2909--2917",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Leaman, Rezarta Islamaj Dogan, and Zhiy- ong Lu. 2013. DNorm: disease name normaliza- tion with pairwise learning to rank. Bioinformatics, 29(22):2909-2917.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Taggerone: joint named entity recognition and normalization with semi-markov models",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2016,
"venue": "Bioinformatics",
"volume": "32",
"issue": "18",
"pages": "2839--2846",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Leaman and Zhiyong Lu. 2016. Tag- gerone: joint named entity recognition and normal- ization with semi-markov models. Bioinformatics, 32(18):2839-2846.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "tmchem: a high performance approach for chemical named entity recognition and normalization",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Chih-Hsuan",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of cheminformatics",
"volume": "7",
"issue": "S1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Leaman, Chih-Hsuan Wei, and Zhiyong Lu. 2015. tmchem: a high performance approach for chemical named entity recognition and normaliza- tion. Journal of cheminformatics, 7(S1):S3.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "AuDis: an automatic CRF-enhanced disease normalization in biomedical text",
"authors": [
{
"first": "Hsin-Chun",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Yi-Yu",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Hung-Yu",
"middle": [],
"last": "Kao",
"suffix": ""
}
],
"year": 2016,
"venue": "Database",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hsin-Chun Lee, Yi-Yu Hsu, and Hung-Yu Kao. 2016. AuDis: an automatic CRF-enhanced disease nor- malization in biomedical text. Database, 2016. Baw091.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [
"Ho"
],
"last": "So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "Bioinformatics. Btz682",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics. Btz682.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Medical Concept Normalization for Online User-Generated Texts",
"authors": [
{
"first": "Kathy",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sadid",
"middle": [
"A."
],
"last": "Hasan",
"suffix": ""
},
{
"first": "Oladimeji",
"middle": [],
"last": "Farri",
"suffix": ""
},
{
"first": "Alok",
"middle": [],
"last": "Choudhary",
"suffix": ""
},
{
"first": "Ankit",
"middle": [],
"last": "Agrawal",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE International Conference on Healthcare Informatics (ICHI)",
"volume": "",
"issue": "",
"pages": "462--469",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathy Lee, Sadid A. Hasan, Oladimeji Farri, Alok Choudhary, and Ankit Agrawal. 2017. Medical Con- cept Normalization for Online User-Generated Texts. In 2017 IEEE International Conference on Health- care Informatics (ICHI), pages 462-469. IEEE.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Fine-Tuning Bidirectional Encoder Representations From Transformers (BERT)-Based Models on Large-Scale Electronic Health Record Notes: An Empirical Study",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yonghao",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Weisong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bhanu Pratap Singh",
"middle": [],
"last": "Rawat",
"suffix": ""
},
{
"first": "Pengshan",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2019,
"venue": "JMIR Med Inform",
"volume": "7",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Li, Yonghao Jin, Weisong Liu, Bhanu Pratap Singh Rawat, Pengshan Cai, and Hong Yu. 2019. Fine- Tuning Bidirectional Encoder Representations From Transformers (BERT)-Based Models on Large- Scale Electronic Health Record Notes: An Empiri- cal Study. JMIR Med Inform, 7(3):e14830.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Cnn-based ranking for biomedical entity normalization",
"authors": [
{
"first": "Haodi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qingcai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Buzhou",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Baohua",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2017,
"venue": "BMC bioinformatics",
"volume": "18",
"issue": "11",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haodi Li, Qingcai Chen, Buzhou Tang, Xiaolong Wang, Hua Xu, Baohua Wang, and Dong Huang. 2017. Cnn-based ranking for biomedical entity nor- malization. BMC bioinformatics, 18(11):79-86.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database",
"authors": [
{
"first": "Jiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yueping",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Robin",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Chih-Hsuan",
"middle": [],
"last": "Sciaky",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Allan",
"middle": [
"Peter"
],
"last": "Leaman",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"J"
],
"last": "Davis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mattingly",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Wiegers",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sci- aky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database, 2016.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Normalising medical concepts in social media texts by learning semantic representation",
"authors": [
{
"first": "Nut",
"middle": [],
"last": "Limsopatham",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1014--1023",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nut Limsopatham and Nigel Collier. 2016. Normalis- ing medical concepts in social media texts by learn- ing semantic representation. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1014-1023, Berlin, Germany. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A Deep Learning Way for Disease Name Representation and Normalization",
"authors": [
{
"first": "Hongwei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yun",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2017,
"venue": "Natural Language Processing and Chinese Computing",
"volume": "",
"issue": "",
"pages": "151--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongwei Liu and Yun Xu. 2017. A Deep Learning Way for Disease Name Representation and Normal- ization. In Natural Language Processing and Chi- nese Computing, pages 151-157. Springer Interna- tional Publishing.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "MCN: A comprehensive corpus for medical concept normalization",
"authors": [
{
"first": "Yen-Fu",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Weiyi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Biomedical Informatics",
"volume": "",
"issue": "",
"pages": "103--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yen-Fu Luo, Weiyi Sun, and Anna Rumshisky. 2019. MCN: A comprehensive corpus for medical concept normalization. Journal of Biomedical Informatics, pages 103-132.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Ramesh Nallapati, and Bing Xiang",
"authors": [
{
"first": "Xiaofei",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Universal text representation from bert: An empirical study",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.07973"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaofei Ma, Peng Xu, Zhiguo Wang, Ramesh Nallap- ati, and Bing Xiang. 2019. Universal text represen- tation from bert: An empirical study. arXiv preprint arXiv:1910.07973.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Deep neural models for medical concept normalization in user-generated texts",
"authors": [
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinov",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "393--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zulfat Miftahutdinov and Elena Tutubalina. 2019. Deep neural models for medical concept normal- ization in user-generated texts. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Work- shop, pages 393-399, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Amitava Bhattacharyya, and Mahanandeeshwar Gattu",
"authors": [
{
"first": "Ishani",
"middle": [],
"last": "Mondal",
"suffix": ""
},
{
"first": "Sukannya",
"middle": [],
"last": "Purkayastha",
"suffix": ""
},
{
"first": "Sudeshna",
"middle": [],
"last": "Sarkar",
"suffix": ""
},
{
"first": "Pawan",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jitesh",
"middle": [],
"last": "Pillai",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "95--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ishani Mondal, Sukannya Purkayastha, Sudeshna Sarkar, Pawan Goyal, Jitesh Pillai, Amitava Bhat- tacharyya, and Mahanandeeshwar Gattu. 2019. Medical entity linking using triplet network. In Pro- ceedings of the 2nd Clinical Natural Language Pro- cessing Workshop, pages 95-100, Minneapolis, Min- nesota, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Hierarchical losses and new resources for fine-grained entity typing and linking",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Murty",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Verga",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "Irena",
"middle": [],
"last": "Radovanovic",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "97--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikhar Murty, Patrick Verga, Luke Vilnis, Irena Radovanovic, and Andrew McCallum. 2018. Hier- archical losses and new resources for fine-grained entity typing and linking. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 97-109, Melbourne, Australia. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning text similarity with Siamese recurrent networks",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Neculoiu",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Versteegh",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Rotaru",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "148--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Neculoiu, Maarten Versteegh, and Mihai Rotaru. 2016. Learning text similarity with Siamese re- current networks. In Proceedings of the 1st Work- shop on Representation Learning for NLP, pages 148-157, Berlin, Germany. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Disease Named Entity Normalization Using Pairwise Learning To Rank and Deep Learning",
"authors": [
{
"first": "Thanh",
"middle": [
"Ngan"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Minh",
"middle": [
"Trang"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Thanh",
"middle": [
"Hai"
],
"last": "Dang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thanh Ngan Nguyen, Minh Trang Nguyen, and Thanh Hai Dang. 2018. Disease Named Entity Nor- malization Using Pairwise Learning To Rank and Deep Learning. Technical report, VNU University of Engineering and Technology.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Multi-task Character-Level Attentional Networks for Medical Concept Normalization",
"authors": [
{
"first": "Jinghao",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Yehui",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Siheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhengya",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wensheng",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Neural Process Lett",
"volume": "49",
"issue": "3",
"pages": "1239--1256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinghao Niu, Yehui Yang, Siheng Zhang, Zhengya Sun, and Wensheng Zhang. 2019. Multi-task Character- Level Attentional Networks for Medical Concept Normalization. Neural Process Lett, 49(3):1239- 1256.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Medical Concept Normalization by Encoding Target Knowledge",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Pattisapu",
"suffix": ""
},
{
"first": "Sangameshwar",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Palshikar",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Machine Learning for Health NeurIPS Workshop",
"volume": "116",
"issue": "",
"pages": "246--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Pattisapu, Sangameshwar Patil, Girish Palshikar, and Vasudeva Varma. 2020. Medical Concept Nor- malization by Encoding Target Knowledge. In Proceedings of the Machine Learning for Health NeurIPS Workshop, volume 116 of Proceedings of Machine Learning Research, pages 246-259. PMLR.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Facenet: A unified embedding for face recognition and clustering",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Schroff",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Kalenichenko",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Philbin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "815--823",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 815-823.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Clinical concept linking with contextualized neural representations",
"authors": [
{
"first": "Elliot",
"middle": [],
"last": "Schumacher",
"suffix": ""
},
{
"first": "Andriy",
"middle": [],
"last": "Mulyar",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8585--8592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elliot Schumacher, Andriy Mulyar, and Mark Dredze. 2020. Clinical concept linking with contextualized neural representations. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, pages 8585-8592, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Biomedical entity representations with synonym marginalization",
"authors": [
{
"first": "Mujeen",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Hwisang",
"middle": [],
"last": "Jeon",
"suffix": ""
},
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3641--3650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mujeen Sung, Hwisang Jeon, Jinhyuk Lee, and Jaewoo Kang. 2020. Biomedical entity representations with synonym marginalization. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 3641-3650, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Overview of the share/clef ehealth evaluation lab 2013",
"authors": [
{
"first": "Hanna",
"middle": [],
"last": "Suominen",
"suffix": ""
},
{
"first": "Sanna",
"middle": [],
"last": "Salanter\u00e4",
"suffix": ""
},
{
"first": "Sumithra",
"middle": [],
"last": "Velupillai",
"suffix": ""
},
{
"first": "Wendy",
"middle": [
"W"
],
"last": "Chapman",
"suffix": ""
},
{
"first": "Guergana",
"middle": [],
"last": "Savova",
"suffix": ""
},
{
"first": "Noemie",
"middle": [],
"last": "Elhadad",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Brett",
"middle": [
"R"
],
"last": "South",
"suffix": ""
},
{
"first": "Danielle",
"middle": [
"L"
],
"last": "Mowery",
"suffix": ""
},
{
"first": "Gareth",
"middle": [
"J F"
],
"last": "Jones",
"suffix": ""
}
],
"year": 2013,
"venue": "International Conference of the Cross-Language Evaluation Forum for European Languages",
"volume": "",
"issue": "",
"pages": "212--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanna Suominen, Sanna Salanter\u00e4, Sumithra Velupil- lai, Wendy W Chapman, Guergana Savova, Noemie Elhadad, Sameer Pradhan, Brett R South, Danielle L Mowery, Gareth JF Jones, et al. 2013. Overview of the share/clef ehealth evaluation lab 2013. In Inter- national Conference of the Cross-Language Evalu- ation Forum for European Languages, pages 212- 231. Springer.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Medical concept normalization in social media posts with recurrent neural networks",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
},
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinov",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Nikolenko",
"suffix": ""
},
{
"first": "Valentin",
"middle": [],
"last": "Malykh",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Biomedical Informatics",
"volume": "84",
"issue": "",
"pages": "93--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Tutubalina, Zulfat Miftahutdinov, Sergey Nikolenko, and Valentin Malykh. 2018. Medical concept normalization in social media posts with recurrent neural networks. Journal of Biomedical Informatics, 84:93-102.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Multi-similarity loss with general pair weighting for deep metric learning",
"authors": [
{
"first": "Xun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xintong",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Weilin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Dengke",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"R"
],
"last": "Scott",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xun Wang, Xintong Han, Weilin Huang, Dengke Dong, and Matthew R. Scott. 2019. Multi-similarity loss with general pair weighting for deep metric learn- ing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Overview of the biocreative v chemical disease relation (cdr) task",
"authors": [
{
"first": "Chih-Hsuan",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Allan",
"middle": [
"Peter"
],
"last": "Davis",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"J"
],
"last": "Mattingly",
"suffix": ""
},
{
"first": "Jiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"C"
],
"last": "Wiegers",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the fifth BioCreative challenge evaluation workshop",
"volume": "14",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Hsuan Wei, Yifan Peng, Robert Leaman, Al- lan Peter Davis, Carolyn J Mattingly, Jiao Li, Thomas C Wiegers, and Zhiyong Lu. 2015. Overview of the biocreative v chemical disease re- lation (cdr) task. In Proceedings of the fifth BioCre- ative challenge evaluation workshop, volume 14.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "2020. A generate-and-rank framework with semantic type regularization for biomedical concept normalization",
"authors": [
{
"first": "Dongfang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zeyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8452--8464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongfang Xu, Zeyu Zhang, and Steven Bethard. 2020. A generate-and-rank framework with semantic type regularization for biomedical concept normalization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8452-8464, Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"num": null,
"text": "Names for similarity search modules.",
"content": "
",
"type_str": "table"
},
"TABREF1": {
"html": null,
"num": null,
"text": "Options for components in sieve-based search.",
"content": "",
"type_str": "table"
},
"TABREF3": {
"html": null,
"num": null,
"text": "Statistics of the five datasets in our experiments.",
"content": "",
"type_str": "table"
},
"TABREF5": {
"html": null,
"num": null,
"text": "",
"content": "shows that sieve-based models |
",
"type_str": "table"
},
"TABREF7": {
"html": null,
"num": null,
"text": "Comparisons of our proposed approaches against the current state-of-the-art performances on NCBI, BC5CDR-D, BC5CDR-C, ShARe/CLEF, and MCN datasets. Approaches with best accuracy are bolded.",
"content": "",
"type_str": "table"
},
"TABREF8": {
"html": null,
"num": null,
"text": "",
"content": "shows the results of our selected mod- |
els on the test set, alongside the best models |
in the literature. Our Train:OD+Search:tuned |
model achieves new state-of-the-art on BC5CDR- |
C (p 8 =0.0291), equivalent performance on NCBI |
",
"type_str": "table"
}
}
}
}