{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:13:14.235673Z"
},
"title": "Evaluating Biomedical Word Embeddings for Vocabulary Alignment at Scale in the UMLS Metathesaurus Using Siamese Networks",
"authors": [
{
"first": "Goonmeet",
"middle": [],
"last": "Bajaj",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Ohio State University",
"location": {}
},
"email": ""
},
{
"first": "Vinh",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Library of Medicine",
"location": {}
},
"email": "vinh.nguyen@nih.gov"
},
{
"first": "Thilini",
"middle": [],
"last": "Wijesiriwardene",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of South Carolina",
"location": {}
},
"email": "thilini@sc.edu"
},
{
"first": "Hong",
"middle": [
"Yung"
],
"last": "Yip",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of South Carolina",
"location": {}
},
"email": "hyip@email.sc.edu"
},
{
"first": "Vishesh",
"middle": [],
"last": "Javangula",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "George Washington University",
"location": {}
},
"email": "visheshj123@gwu.edu"
},
{
"first": "Srinivasan",
"middle": [],
"last": "Parthasarathy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Ohio State University",
"location": {}
},
"email": "parthasarathy.2@osu.edu"
},
{
"first": "Amit",
"middle": [],
"last": "Sheth",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of South Carolina",
"location": {}
},
"email": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Bodenreider",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Library of Medicine",
"location": {}
},
"email": "obodenreider@mail.nih.gov"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent work uses a Siamese Network, initialized with BioWordVec embeddings (distributed word embeddings), for predicting synonymy among biomedical terms to automate a part of the UMLS (Unified Medical Language System) Metathesaurus construction process. We evaluate the use of contextualized word embeddings extracted from nine different biomedical BERT-based models for synonymy prediction in the UMLS by replacing BioWordVec embeddings with embeddings extracted from each biomedical BERT model using different feature extraction methods. Surprisingly, we find that Siamese Networks initialized with BioWordVec embeddings still outperform the Siamese Networks initialized with embedding extracted from biomedical BERT model.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent work uses a Siamese Network, initialized with BioWordVec embeddings (distributed word embeddings), for predicting synonymy among biomedical terms to automate a part of the UMLS (Unified Medical Language System) Metathesaurus construction process. We evaluate the use of contextualized word embeddings extracted from nine different biomedical BERT-based models for synonymy prediction in the UMLS by replacing BioWordVec embeddings with embeddings extracted from each biomedical BERT model using different feature extraction methods. Surprisingly, we find that Siamese Networks initialized with BioWordVec embeddings still outperform the Siamese Networks initialized with embedding extracted from biomedical BERT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The UMLS (Bodenreider, 2004 ) is a biomedical terminology integration system that includes over 200 source vocabularies 1 . The UMLS Metathesaurus construction process organizes synonymous terms from these source vocabularies into concepts. The current Metathesaurus construction process uses a lexical similarity model and semantic preprocessing to determine synonymy, followed by a human review. The large scale and diversity of the Metathesaurus make the construction process very challenging, tedious, and error-prone. Therefore, to assist the UMLS Metathesaurus construction process, Nguyen et al. introduced the UMLS Vocabulary Alignment (UVA) task, or synonymy prediction task . They designed and train a Siamese Network to predict if two UMLS atoms are synonymous. The Siamese Network is initialized using BioWordVec embeddings, learned using fastText (Bojanowski et al., 2017) . Given the recent successful use of contextualized word embeddings, extracted from Transformer models, for different downstream NLP tasks (Devlin et al., 2019; Vaswani et al., 2017; , we explore the use of contextualized embeddings extracted from several distinct biomedical BERT-based language models. Objectives. 1) Find which type of word embeddings, including contextualized embeddings, achieves the best performance when used with the Siamese Network for the synonymy prediction (or UVA) task. 2) Find which feature extraction method works best to extract word embeddings from the biomedical BERT models for optimal performance. 3) Find the best hyperparameters and optimization of the prediction task to train the Siamese Networks for the UVA task.",
"cite_spans": [
{
"start": 9,
"end": 27,
"text": "(Bodenreider, 2004",
"ref_id": "BIBREF2"
},
{
"start": 860,
"end": 885,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 1025,
"end": 1046,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 1047,
"end": 1068,
"text": "Vaswani et al., 2017;",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Approach. 1) We analyze the performance of the Siamese Networks initialized with embeddings from nine different biomedical BERT models for synonymy prediction. 2) We explore different feature extraction techniques to extract BERT embeddings. 3) We conduct a grid search and optimization of the prediction task to train the Siamese Networks. Contributions. 1) We conduct an extensive analysis to extract embeddings from nine different biomedical BERT models using four feature extraction techniques. 2) Somewhat surprisingly, we find that Siamese Networks still achieve the highest performance for synonymy prediction when initialized with BioWordVec embeddings. 3) We find that no single feature extraction method works well across the different biomedical BERT models. 4) With a thorough grid search, we find substantial increases in F1-Score (e.g., 2.43%), when compared to the default hyperparameters. 5) Overall, our work contributes to defining best practices for the use of embeddings in Siamese Networks. See https: //arxiv.org/abs/2109.13348 for an extension of this paper as it presents an extended analysis of the experiments and additional results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The UMLS Metathesaurus links terms and codes between health records, pharmacy documents, and insurance documents (Bodenreider, 2004) . The Metathesaurus consists of several building blocks, including atoms and concepts. All atoms in the UMLS Metathesaurus are assigned a unique identifier (AUI). Atoms that are synonymous are grouped into a single concept identified with a concept unique identifier (CUI). Table 1 contains examples of synonymous atoms and the identifiers assigned to each respective atom for a Tuple Atom String Source AUI CUI t 1 Headache MSH A0066000 C0018681 t 2 Headaches MSH A0066008 C0018681 t 3 Cephalodynia MSH A26628141 C0018681 t 4",
"cite_spans": [
{
"start": 113,
"end": 132,
"text": "(Bodenreider, 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 407,
"end": 637,
"text": "Table 1 contains examples of synonymous atoms and the identifiers assigned to each respective atom for a Tuple Atom String Source AUI CUI t 1 Headache MSH A0066000 C0018681 t 2 Headaches MSH A0066008 C0018681 t 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "UMLS -Knowledge Representation",
"sec_num": "2"
},
{
"text": "Cephalodynia SNOMEDCT US A2957278 C0018681 Table 1 : Examples tuples from UMLS consisting of an atom string, its source vocabulary name, its unique atom identifier (AUI), and its concept unique identifier (CUI). All tuples in the example table are synonymous and, hence, have the same CUI. particular concept. For example, the term \"Cephalodynia\" appearing in both MSH and SNOMEDCT US has different AUIs as shown in Table 1 . Additionally, the strings \"Headache\" and \"Headaches\" have different AUIs because of the lexical variation (see Table 1 ). We use the 2020AA version of the UMLS, which contains 15.5 million atoms from 214 source vocabularies grouped into 4.28 million concepts.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 50,
"text": "Table 1",
"ref_id": null
},
{
"start": 416,
"end": 423,
"text": "Table 1",
"ref_id": null
},
{
"start": 537,
"end": 544,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "UMLS -Knowledge Representation",
"sec_num": "2"
},
{
"text": "An essential part of the UMLS construction process is identifying similar atoms across source vocabularies to integrate knowledge from different sources accurately. The UMLS Vocabulary Alignment (UVA)or synonymy prediction -task is to identify synonymous atoms by measuring the similarity among pairs of atoms. A machine learning model should be able to identify the synonymous atoms are that lexically: similar but are not synonymous and dissimilar but are synonymous. Let (t i , t j ) be a pair of input tuples, where i \u0338 = j. Each tuple is initialized from a different source vocabulary in the form of (str, src, aui), where str is the atom string, src is the source vocabulary, and aui is the atom unique identifier (AUI). Let f : T \u00d7 T \u2192 0, 1 be a prediction function that maps a pair of input tuples to either 0 or 1. If f (t i , t j ) = 1, then the atom strings (str i , str j ) from t i and t j are synonymous and belong to the same concept (and hence, share same the CUI).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
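To make the formalization above concrete, here is a minimal Python sketch of the tuple structure and of a prediction function f. The names AtomTuple, make_predictor, and jaccard_char_sim, and the character-level similarity itself, are illustrative inventions; the paper learns f with a Siamese Network rather than a hand-written similarity.

```python
# Toy illustration of the UVA formalization: tuples (str, src, aui) and f: T x T -> {0, 1}.
from typing import NamedTuple, Callable

class AtomTuple(NamedTuple):
    str_: str   # atom string (str)
    src: str    # source vocabulary
    aui: str    # atom unique identifier

# Example tuples from Table 1.
t1 = AtomTuple("Headache", "MSH", "A0066000")
t3 = AtomTuple("Cephalodynia", "MSH", "A26628141")

def jaccard_char_sim(a: str, b: str) -> float:
    # Toy lexical similarity over character sets, used only to make f concrete.
    sa, sb = set(a.lower()), set(b.lower())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def make_predictor(sim: Callable[[str, str], float], threshold: float = 0.5):
    # f returns 1 (synonymous, same CUI) when the similarity exceeds the threshold.
    def f(ti: AtomTuple, tj: AtomTuple) -> int:
        return int(sim(ti.str_, tj.str_) > threshold)
    return f

f = make_predictor(jaccard_char_sim)
print(f(t1, t3))   # 1 would mean "predicted synonymous"
```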
{
"text": "We thank Nguyen et al. for sharing the dataset used in their work . The dataset is created using the 2020AA release of the UMLS Metathesaurus. We use the ALL dataset for our study. for a detailed description.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "We first describe the Siamese Networks for the UVA then describe the biomedical BERT variants. Siamese Networks for the UVA Task",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Nguyen et al. assess the similarity of atoms using lexical features of the atom strings (str). The authors design a Siamese Network that inputs a pair of atom strings, and outputs a similarity score between 0 and 1, sim(str i , str j ) \u2208 [0, 1] (see Figure 1 ). The inputs are preprocessed, tokenized, and then sent through an initial embedding layer initialized with BioWordVec embeddings (Zhang et al., 2019) . The word embeddings are then fed into Bidirectional Long Short Term Memory (Bi-LSTM) layers, followed by two dense layers. All atom pairs with a similarity > 0.5 are considered synonyms (using the Manhattan distance). Their deep learning model has a precision of 94.64%, recall of 94.96% and an F1-Score of 94.8% and outperforms a rule-based approach for synonymy prediction by 23% in recall, 2.4% in precision, and 14.1% in F1-Score. In their follow-up work, Nguyen et al. add an attention layer after the Bi-LSTM layers that improves the precision by +3.63% but decreases recall by 1.42%.",
"cite_spans": [
{
"start": 390,
"end": 410,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 250,
"end": 258,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
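For readers who want a concrete picture of this architecture, the following PyTorch sketch approximates the described pipeline (embedding layer, Bi-LSTM, dense layers, Manhattan-distance similarity with a 0.5 threshold). The layer sizes, class names, and the exp(-L1) mapping of the distance into (0, 1] are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of a Siamese Bi-LSTM with Manhattan-distance similarity (assumed sizes).
import torch
import torch.nn as nn

class SiameseEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=200, hidden=50, pretrained=None):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        if pretrained is not None:               # e.g., BioWordVec or BERT token embeddings
            self.embedding.weight.data.copy_(pretrained)
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.dense = nn.Sequential(nn.Linear(2 * hidden, 64), nn.ReLU(), nn.Linear(64, 32))

    def forward(self, token_ids):                # token_ids: (batch, seq_len)
        emb = self.embedding(token_ids)
        out, _ = self.bilstm(emb)
        return self.dense(out[:, -1, :])         # last time step as the atom representation

def manhattan_similarity(a, b):
    # exp(-L1 distance) maps the Manhattan distance into a (0, 1] similarity score.
    return torch.exp(-torch.sum(torch.abs(a - b), dim=1))

encoder = SiameseEncoder(vocab_size=30522)
x1 = torch.randint(0, 30522, (4, 16))            # a batch of 4 tokenized atom strings (random ids)
x2 = torch.randint(0, 30522, (4, 16))            # their paired atom strings
sim = manhattan_similarity(encoder(x1), encoder(x2))
pred = (sim > 0.5).int()                         # pairs above the threshold are predicted synonyms
```

Both branches share the same encoder weights, which is what makes the network "Siamese"; only the similarity head compares the two resulting representations.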
{
"text": "In this section, we summarize the specific biomedical BERT variants used in this study. For brevity, we focus on biomedical BERT variants and omit the general presentation of BERT. We refer the interested reader to (Devlin et al., 2019) for details. Table 2 compares the different biomedical BERT models used in this benchmarking study. To limit the scope of the biomedical BERT models, we only include models that have been pretrained with data from biomedical sources, such as biomedical terminologies (e.g., UMLS vocabularies), biomedical literature (e.g., PubMed), and clinical notes (e.g., MIMIC-III). BioBERT: BioBERT is initialized from BERT and then pretrained on PubMed abstracts and PubMed Central (PMC) full-text articles (Lee et al., 2020) . We use both BioBERT-Base and BioBERT-Large. BlueBERT: BlueBERT is initialized with BERT weights provided by (Devlin et al., 2019) pretrained with the PubMed Abstract and MIMIC-III datasets. We use BlueBERT-Large in our work.",
"cite_spans": [
{
"start": 215,
"end": 236,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 733,
"end": 751,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 862,
"end": 883,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 250,
"end": 257,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Biomedical BERT Models",
"sec_num": null
},
{
"text": "SapBERT: SapBERT provides the current state-of-theart (SOTA) results for six medical entity linking benchmarking datasets (Liu et al., 2021) . SapBERT is trained on the UMLS with 4M+ concepts and 10M+ synonyms from over 150 vocabularies.",
"cite_spans": [
{
"start": 122,
"end": 140,
"text": "(Liu et al., 2021)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Biomedical BERT Models",
"sec_num": null
},
{
"text": "UMLSBERT: UMLSBERT is initialized with the pretrained Bio ClinicalBERT model (Alsentzer et al., 2019) and pretrained with the MLM task on the MIMIC-III dataset with additional modifications. {BioBERT, BlueBERT, UMLSBERT, Vanill-aBERT} + SapBERT: The SapBERT authors pretrain additional variants of SapBERT that are initialized using different BERT variants. We refer the reader to (Liu et al., 2021) for a detailed description.",
"cite_spans": [
{
"start": 77,
"end": 101,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 381,
"end": 399,
"text": "(Liu et al., 2021)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Biomedical BERT Models",
"sec_num": null
},
{
"text": "To analyze the performance of the different embeddings extracted from the various BERT models, we train the Siamese Network end to end, similar to . We investigate the use of the nine biomedical BERT models (mentioned in Section 5) as a source of word embeddings. Our experimental setup of consists of two primary steps for each of the Siamese Networks (with and without attention): 1) Feature extraction of word embeddings from biomedical BERT Models. 2) Grid search of optimal hyperparameters and optimization. Our code will be available at https://anonymous.4open.science/r/uva embedding benchmarking-8124/. For the training and testing data, we recommend reaching out to Nguyen et al. .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "6"
},
{
"text": "BioWordVec has a fixed word embedding for each word or term (e.g., UMLS atom). For transformer models, word embedding extraction is not as straightforward because different layers of BERT capture different types of features (Jawahar et al., 2019; Liu et al., 2019; Reimers and Gurevych, 2017; Peters et al., 2018; van Aken et al., 2019; Devlin et al., 2019) . We initialize Siamese Networks with token embeddings instead of word embeddings to use BERT models for the UVA task. To extract token embeddings for UMLS atoms from each BERT model, we: 1) Tokenize the atom strings using the model-specific vocabulary. 2) Create a token id tensor by mapping the token strings to their vocabulary indices. 3) Create a segment id tensor. 4) Feed the token id and segment id tensors in to the BERT model (in eval mode). 5) Create a separate token embedding matrix to initialize the Siamese Networks using each of the following methods:",
"cite_spans": [
{
"start": 224,
"end": 246,
"text": "(Jawahar et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 247,
"end": 264,
"text": "Liu et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 265,
"end": 292,
"text": "Reimers and Gurevych, 2017;",
"ref_id": "BIBREF14"
},
{
"start": 293,
"end": 313,
"text": "Peters et al., 2018;",
"ref_id": "BIBREF12"
},
{
"start": 314,
"end": 336,
"text": "van Aken et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 337,
"end": 357,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction for the Siamese Network",
"sec_num": null
},
{
"text": "\u2022 1 st token embedding and last layer \u2022 1 st token embed. and avg. of last 4 layers \u2022 Last token embedding and last layer \u2022 Last token embed. and avg. of last 4 layers \u2022 Avg. token embedding and last layer \u2022 Avg. token embed. and avg. of last 4 layers Of note, we do not use the \"CLS\" sentence representation as the word embedding for UMLS atoms because the Bi-LSTMs layers require a sequence as input. We only use the atom string to extract token embeddings because all vocabularies in the UMLS have this characteristic in common. In summary, we extract two sets of embeddings from each model (the 12 th layer and average of the 9 th to 12 th layers) and use three different types of token embeddings (the first and last occurrence of the token in the dataset and the average embedding of each occurrence of the token in the dataset).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction for the Siamese Network",
"sec_num": null
},
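As an illustration of one of these methods (average token embedding, average of the last four hidden layers), the sketch below uses the Hugging Face transformers API. The checkpoint name is only an example, the two atom strings are sample inputs, and the exact preprocessing in the paper's pipeline may differ.

```python
# Sketch of the "avg. token embedding, avg. of last 4 layers" extraction method.
import torch
from collections import defaultdict
from transformers import AutoTokenizer, AutoModel

name = "cambridgeltl/SapBERT-from-PubMedBERT-fulltext"    # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True).eval()

sums, counts = defaultdict(lambda: 0.0), defaultdict(int)
atoms = ["Headache", "Cephalodynia"]                      # stand-in UMLS atom strings

with torch.no_grad():
    for atom in atoms:
        enc = tokenizer(atom, return_tensors="pt")        # token ids + segment ids
        hidden = model(**enc).hidden_states               # input embeddings + one tensor per layer
        avg_last4 = torch.stack(hidden[-4:]).mean(dim=0)[0]   # (seq_len, hidden_dim)
        for tok, vec in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), avg_last4):
            if tok in ("[CLS]", "[SEP]"):
                continue
            sums[tok] = sums[tok] + vec                   # accumulate per-token vectors
            counts[tok] += 1

# Average embedding over every occurrence of each token; this matrix would then
# initialize the Siamese Network's embedding layer.
token_embeddings = {tok: sums[tok] / counts[tok] for tok in sums}
```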
{
"text": "The performance of deep learning models highly depends on the selection of hyperparameters (Hutter et al., 2014; Bergstra and Bengio, 2012; Reimers and Gurevych, 2017) . Prior work by Nguyen et al. uses a fixed set of hyperparameters. Therefore, we conduct a grid search for the best-performing models to thoroughly investigate the performance of the Siamese Networks. Hyperparameters used in our experiment include optimizer (SGD, Adam) and learning rate (0.00001, 0.0001, 0.001, 0.01, 0.1). To limit computational cost, we conduct a grid search for the following Siamese Networks: BioWordVec (BWV), BioWordVec + Attention (BWV + Att.), SapBERT avg. token embedding extracted by averaging the last 4 layers (SB Avg Token + Avg Last 4), SapBERT avg. token embedding extracted from the last layer + Attention (SB Avg Token + Last Lay + Att.). Additionally, Nguyen et al. provide no rationale for the similarity threshold of 0.5 between the learned representations of two atoms. Therefore, we search for the best threshold for prediction based on the precision-recall curve to find a threshold that maximizes the F1-Score. Table 3 presents the synonymy prediction results using embeddings extracted from BERT models and BioWordVec embeddings. The Token Type and Extraction Method columns indicate the feature extraction method that was used to initialize the model. Performance with BERT Embeddings: We find that Siamese Networks initialized with BioWordVec still outperform all models initialized with embeddings ex-Siamese Network without Attention Siamese Network with Attention Embedding Table 3 : Results for Siamese Networks trained for 100 iterations initialized using different embeddings using the best prediction threshold (single run point estimates). Rows marked with \"w.\" contain the performance of the models after grid search.",
"cite_spans": [
{
"start": 91,
"end": 112,
"text": "(Hutter et al., 2014;",
"ref_id": "BIBREF5"
},
{
"start": 113,
"end": 139,
"text": "Bergstra and Bengio, 2012;",
"ref_id": "BIBREF1"
},
{
"start": 140,
"end": 167,
"text": "Reimers and Gurevych, 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 1121,
"end": 1128,
"text": "Table 3",
"ref_id": null
},
{
"start": 1590,
"end": 1597,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Grid Search and Optimization",
"sec_num": null
},
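The threshold search described above can be sketched with scikit-learn's precision_recall_curve; the labels and scores below are synthetic stand-ins for the validation labels and the Siamese Network's similarity outputs.

```python
# Sketch of picking the prediction threshold that maximizes F1 on the precision-recall curve.
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                                   # stand-in synonymy labels
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, size=1000), 0, 1)  # stand-in similarities

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
best = np.argmax(f1[:-1])               # the last precision/recall point has no threshold
print(f"best threshold={thresholds[best]:.4f}, F1={f1[best]:.4f}")
```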
{
"text": "tracted from biomedical BERT models. Though surprising, Schulz and Juric also find that current embeddings are limited in their ability to adequately encode medical terms when tested on large-scale datasets (Schulz and Juric, 2020) . Moreover, using a BERT model trained on more relevant domain-specific data and the right task yields more substantial gains. In particular, the SapBERT model, whose embeddings achieve the highest performance, is trained on PubMed and incorporates knowledge from the UMLS Metathesaurus by using semantic type embeddings and modifying the MLM task to indicate if which words belong to the same concept. These changes likely indicate why it outperforms the other biomedical BERT models for our task. Feature Extraction for Biomedical BERT Models: Based on our experiments, no single feature extraction method provides the most useful embedding for all BERT models. However, results indicate that averaging all token embeddings and using the average of the last four hidden layers seems to work well for many of the models. The Siamese Network + Attention initialized with the average token embedding extracted from the last layer of SapBERT achieves the best F1-Score. Performance after Grid Search: As mentioned in Section 6, we limit the grid search to the four best performing models: BWV, BWV + Att., SB Avg Token + Avg Last 4, and SB Avg Token + Last Lay + Att. Our grid search results indicate that the Siamese Network without attention outperforms the Siamese Network with attention when initialized with BioWordVec embeddings. Additionally, there is a 2.43% increase in F1-Score for the Siamese Network with attention and a 3.11% increase in F1-Score for the Siamese Network w.o. attention. Reducing the batch size leads to early stopping for all models but at the cost of performance (e.g, 4.67% drop in F1-Score for BWV + Att. w. SGD).",
"cite_spans": [
{
"start": 207,
"end": 231,
"text": "(Schulz and Juric, 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "Optimizer. For the four best performing models, we see that SGD works better in three of the cases. For only one model, Adam performs similarly to SGD with a higher F1-Score by 0.16%. There is a 1% increase in F-1 Score for the Siamese Network with Attention ini-tialized with SB + Avg Token + Last Lay embeddings. Using the SGD optimizer leads to earlier convergence for when using biomedical BERT embeddings. Learning Rate. Regardless of the optimizer, increasing the learning rate (LR) to 0.01 and 0.1 leads to early stopping and results in poor F1-Scores. With a LR of 0.0001, the performance for the Siamese Networks initialized with SapBERT embeddings extracted using the average token and the last layer of the SapBERT model, F1-Score increases by about 0.6% for the model with attention and a 3.11% increase for the model without attention. Reducing the LR further decreases performance for Siamese Networks using BWV embeddings. Threshold. The best performing thresholds range from 0.5438 to 0.581. On average using the best thresholds results in 0.0086% increase in F1-Score for the Siamese Networks without attention (results omitted due to space). Hence, 0.5 is an acceptable threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "We investigate if contextualized embeddings extracted from biomedical BERT-based language models can improve the performance of Siamese Networks, introduced by , to predict synonymy in the UMLS Metathesaurus. Despite the excellent performance of BERT models on biomedical NLP tasks, BioWordVec embeddings still remain competitive for the UVA task. This confirms the importance of investigating the use of traditional distributed word embeddings. Among the biomedical BERT models, SapBERT trained on UMLS data performs best, suggesting the importance of using a model trained on datasets directly relevant to the task at hand. Finally, we demonstrate the importance of exploring different feature extraction methods and hyperparameter tuning for deep learning models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "We thank Nguyen et al. for sharing the dataset used in their work . To get a copy of the dataset, please sign the UMLS License Agreement and email Nguyen to receive the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Dataset",
"sec_num": null
},
{
"text": "We first train both Siamese Networks (with attention and without attention ) with the default hyperparameters for each biomedical BERT model with each of the different embedding extraction methods. The default hyperparameters rely on Adam as the optimizer with a learning rate of 0.001 and 8192 examples in batch. This results in 20 different Siamese Networks, each trained for 100 epochs. Next, we take the best performing Siamese models initialized with BERT embeddings and the two Siamese models initialized with BioWord-Vec embeddings and conduct a grid search to find the optimal hyperparameters. We conduct a grid search for a total of 4 Siamese Networks and evaluate each model using the following metrics: Accuracy, Precision, Recall, F1-Score, and AUC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Experimental Details",
"sec_num": null
},
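A schematic version of this protocol, with the stated defaults (Adam, learning rate 0.001, batch size 8192, 100 epochs) and the optimizer/learning-rate grid from Section 6, is sketched below. train_siamese is a hypothetical placeholder returning dummy metrics, not the authors' training code; the network labels reuse the abbreviations from the paper.

```python
# Schematic default configuration and grid search over optimizer x learning rate.
import itertools

DEFAULTS = {"optimizer": "Adam", "lr": 0.001, "batch_size": 8192, "epochs": 100}
GRID = {"optimizer": ["SGD", "Adam"], "lr": [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]}
NETWORKS = ["BWV", "BWV + Att.", "SB Avg Token + Avg Last 4", "SB Avg Token + Last Lay + Att."]

def train_siamese(network: str, optimizer: str, lr: float, batch_size: int, epochs: int) -> dict:
    """Placeholder for the real training/evaluation loop; returns dummy metrics."""
    return {"F1": 0.90 - abs(lr - 1e-4)}       # dummy score, peaked near lr=1e-4

def grid_search(network: str):
    best = None
    for opt, lr in itertools.product(GRID["optimizer"], GRID["lr"]):
        cfg = dict(DEFAULTS, optimizer=opt, lr=lr)
        metrics = train_siamese(network, **cfg)   # Accuracy, Precision, Recall, F1-Score, AUC
        if best is None or metrics["F1"] > best[1]["F1"]:
            best = (cfg, metrics)
    return best

for net in NETWORKS:
    cfg, metrics = grid_search(net)
    print(net, cfg["optimizer"], cfg["lr"], round(metrics["F1"], 4))
```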
{
"text": "All experiments are run using a High Performance Computing cluster. The typical run time for a Siamese Network with BioWordVec embeddings is 48 hours for 100 iterations using a v100x NVIDIA GPU and requires about 220 GB of memory. A Siamese Network trained with BERT embeddings takes about 72 hours for 100 iterations using a v100x NVIDIA GPU and requires about 220 GB of memory. The training time is further increased to 88 hours for Siamese Networks trained with embeddings of dimensions 1024 (i.e., BioBERT-Large and BlueBERT embeddings).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Experimental Details",
"sec_num": null
},
{
"text": "Our work evaluates biomedical word embeddings extracted from BERT-based models for the Siamese Networks introduced by . Our list of biomedical BERT models does not include all models; we consider the most recent biomedical BERT models that have achieved SOTA performance on NLP tasks. The narrow focus of our work allows us to conduct a thorough analysis of the embedding extraction methods and hyperparameters using nine different BERT models for two variants of the Siamese Network. However, our experimental setup is reproducible for similar NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Limitations",
"sec_num": null
},
{
"text": "As an additional exercise to test the usability of transformer based embeddings, we attempt to use the \"CLS\" sentence representation of the UMLS atoms. For a pair of UMLS atoms, we extract the \"CLS\" sentence representation of each UMLS atom and compute the similarity of the representation using both the Cosine and Manhattan distance functions. We find that this approach does not work well (< 30% accuracy). As future work, we can investigate if adding a deep neural net (different from a Siamese Network) can improve the performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Limitations",
"sec_num": null
},
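A minimal sketch of this "CLS" experiment, assuming a Hugging Face checkpoint (the model name is an example and the atom strings are sample inputs): extract the [CLS] vector of each atom and compare pairs with cosine similarity and an exponentiated Manhattan distance.

```python
# Sketch: score a pair of atoms by comparing their [CLS] representations.
import torch
from transformers import AutoTokenizer, AutoModel

name = "cambridgeltl/SapBERT-from-PubMedBERT-fulltext"    # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

def cls_embedding(text: str) -> torch.Tensor:
    with torch.no_grad():
        out = model(**tokenizer(text, return_tensors="pt"))
    return out.last_hidden_state[0, 0]                    # [CLS] token of the last layer

a, b = cls_embedding("Headache"), cls_embedding("Cephalodynia")
cosine = torch.nn.functional.cosine_similarity(a, b, dim=0).item()
manhattan = torch.exp(-torch.sum(torch.abs(a - b))).item()   # map L1 distance into (0, 1]
print(f"cosine={cosine:.3f}, manhattan={manhattan:.3f}")
```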
{
"text": "https://uts.nlm.nih.gov/uts/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank Liu et al. for providing the additional pretrained SapBERT models (Liu et al., 2021 ) and a cooperative AI Institute grant (AI-EDGE), from the National Science Foundation under CNS-2112471.",
"cite_spans": [
{
"start": 84,
"end": 101,
"text": "(Liu et al., 2021",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "9"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Publicly available clinical BERT embeddings",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Jindi",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mcdermott",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "72--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clin- ical BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78, Minneapolis, Minnesota, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Random search for hyper-parameter optimization",
"authors": [
{
"first": "James",
"middle": [],
"last": "Bergstra",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Machine Learning Research",
"volume": "13",
"issue": "",
"pages": "281--305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13:281-305.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The unified medical language system (UMLS): integrating biomedical terminology",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Bodenreider",
"suffix": ""
}
],
"year": 2004,
"venue": "Nucleic acids research",
"volume": "32",
"issue": "1",
"pages": "267--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Bodenreider. 2004. The unified medical language system (UMLS): integrating biomed- ical terminology. Nucleic acids research, 32(suppl 1):D267-D270.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of NAACL-HLT, pages 4171-4186.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An efficient approach for assessing hyperparameter importance",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Hoos",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Leyton-Brown",
"suffix": ""
}
],
"year": 2014,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "754--762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Hutter, Holger Hoos, and Kevin Leyton-Brown. 2014. An efficient approach for assessing hyperpa- rameter importance. In International conference on machine learning, pages 754-762. PMLR.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "What does BERT learn about the structure of language",
"authors": [
{
"first": "Ganesh",
"middle": [],
"last": "Jawahar",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL 2019-57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In ACL 2019-57th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BioBERT: a pretrained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre- trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Self-alignment pretraining for biomedical entity representations",
"authors": [
{
"first": "Fangyu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ehsan",
"middle": [],
"last": "Shareghi",
"suffix": ""
},
{
"first": "Zaiqiao",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Basaldella",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4228--4238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fangyu Liu, Ehsan Shareghi, Zaiqiao Meng, Marco Basaldella, and Nigel Collier. 2021. Self-alignment pretraining for biomedical entity representations. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 4228-4238.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Linguistic knowledge and transferability of contextual representations",
"authors": [
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1073--1094",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Lin- guistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 1073-1094, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adding an Attention Layer Improves the Performance of a Neural Network Architecture for Synonymy Prediction in the UMLS Metathesaurus",
"authors": [
{
"first": "Vinh",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Bodenreider",
"suffix": ""
}
],
"year": 2021,
"venue": "MedInfo",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vinh Nguyen and Olivier Bodenreider. 2021. Adding an Attention Layer Improves the Performance of a Neural Network Architecture for Synonymy Predic- tion in the UMLS Metathesaurus. In MedInfo.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Biomedical Vocabulary Alignment at Scale in the UMLS Metathesaurus",
"authors": [
{
"first": "Vinh",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Hong",
"middle": [
"Yung"
],
"last": "Yip",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Bodenreider",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Web Conference 2021",
"volume": "",
"issue": "",
"pages": "2672--2683",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vinh Nguyen, Hong Yung Yip, and Olivier Boden- reider. 2021. Biomedical Vocabulary Alignment at Scale in the UMLS Metathesaurus. In Proceedings of the Web Conference 2021, pages 2672-2683.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "To tune or not to tune? adapting pretrained representations to diverse tasks",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Sebastian Ruder, and Noah A Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. ACL 2019, page 7.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Reporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence Tagging",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "338--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2017. Report- ing Score Distributions Makes a Difference: Perfor- mance Study of LSTM-networks for Sequence Tag- ging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338-348.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Can embeddings adequately represent medical terminology? new large-scale medical term similarity datasets have the answer!",
"authors": [
{
"first": "Claudia",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "Damir",
"middle": [],
"last": "Juric",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "8775--8782",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudia Schulz and Damir Juric. 2020. Can em- beddings adequately represent medical terminology? new large-scale medical term similarity datasets have the answer! In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 34, pages 8775-8782.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "How does bert answer questions? a layer-wise analysis of transformer representations",
"authors": [
{
"first": "Betty",
"middle": [],
"last": "van Aken",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Winter",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "L\u00f6ser",
"suffix": ""
},
{
"first": "Felix",
"middle": [
"A"
],
"last": "Gers",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "1823--1832",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Betty van Aken, Benjamin Winter, Alexander L\u00f6ser, and Felix A Gers. 2019. How does bert answer questions? a layer-wise analysis of transformer rep- resentations. In Proceedings of the 28th ACM Inter- national Conference on Information and Knowledge Management, pages 1823-1832.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BioWordVec, improving biomedical word embeddings with subword information and MeSH",
"authors": [
{
"first": "Yijia",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Qingyu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhihao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Hongfei",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "Scientific data",
"volume": "6",
"issue": "1",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yijia Zhang, Qingyu Chen, Zhihao Yang, Hongfei Lin, and Zhiyong Lu. 2019. BioWordVec, improving biomedical word embeddings with subword infor- mation and MeSH. Scientific data, 6(1):1-9.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Siamese Network used for Synonymy Prediction. Nguyen et al. use BioWordVec embeddings, whereas we use contextualized word embeddings. \"*\" indicates optional attention layer.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF1": {
"num": null,
"html": null,
"content": "<table><tr><td>Model Type</td><td>Embed. Dim. Vocab Size Token Size</td></tr><tr><td>BioWordVec BioBERT (+ SapBERT) BioBERT-Large (Cased) BlueBERT SapBERT UMLSBERT (+ SapBERT) BlueBERT+ SapBERT VanillaBERT + SapBERT</td><td>200 268,158,600 768 28,996 13,230,336 -1024 58,996 28,530,688 1024 30,522 25,358,336 768 30,522 21,035,520 768 28,996 13,230,336 768 30,522 19,018,752 768 30,522 19,018,752</td></tr></table>",
"type_str": "table",
"text": "and further"
},
"TABREF2": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Comparison of different biomedical word embeddings in terms of the embedding dimension, vocabulary size, and the number of tokens."
}
}
}
}