{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:07:24.057054Z"
},
"title": "End-to-end Biomedical Entity Linking with Span-based Dictionary Matching",
"authors": [
{
"first": "Shogo",
"middle": [],
"last": "Ujiie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology \u2665 Megagon Labs",
"location": {}
},
"email": "ujiie@is.naist.jp"
},
{
"first": "Hayate",
"middle": [],
"last": "Iso",
"suffix": "",
"affiliation": {},
"email": "hayate@magagon.ai"
},
{
"first": "Shuntaro",
"middle": [],
"last": "Yada",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology \u2665 Megagon Labs",
"location": {}
},
"email": "yada-s@is.naist.jp"
},
{
"first": "Shoko",
"middle": [],
"last": "Wakamiya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology \u2665 Megagon Labs",
"location": {}
},
"email": "wakamiya@is.naist.jp"
},
{
"first": "Eiji",
"middle": [],
"last": "Aramaki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology \u2665 Megagon Labs",
"location": {}
},
"email": "aramaki@is.naist.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Disease name recognition and normalization , which is generally called biomedical entity linking, is a fundamental process in biomedical text mining. Recently, neural joint learning of both tasks has been proposed to utilize the mutual benefits. While this approach achieves high performance, disease concepts that do not appear in the training dataset cannot be accurately predicted. This study introduces a novel end-to-end approach that combines span representations with dictionary-matching features to address this problem. Our model handles unseen concepts by referring to a dictionary while maintaining the performance of neural network-based models, in an end-to-end fashion. Experiments using two major datasets demonstrate that our model achieved competitive results with strong baselines, especially for unseen concepts during training.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Disease name recognition and normalization , which is generally called biomedical entity linking, is a fundamental process in biomedical text mining. Recently, neural joint learning of both tasks has been proposed to utilize the mutual benefits. While this approach achieves high performance, disease concepts that do not appear in the training dataset cannot be accurately predicted. This study introduces a novel end-to-end approach that combines span representations with dictionary-matching features to address this problem. Our model handles unseen concepts by referring to a dictionary while maintaining the performance of neural network-based models, in an end-to-end fashion. Experiments using two major datasets demonstrate that our model achieved competitive results with strong baselines, especially for unseen concepts during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Identifying disease names , which is generally called biomedical entity linking, is the fundamental process of biomedical natural language processing, and it can be utilized in applications such as a literature search system ) and a biomedical relation extraction (Xu et al., 2016 ). The usual system to identify disease names consists of two modules: named entity recognition (NER) and named entity normalization (NEN). NER is the task that recognizes the span of a disease name, from the start position to the end position. NEN is the post-processing of NER, normalizing a disease name into a controlled vocabulary, such as a MeSH or Online Mendelian Inheritance in Man (OMIM).",
"cite_spans": [
{
"start": 264,
"end": 280,
"text": "(Xu et al., 2016",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although most previous studies have developed pipeline systems, in which the NER model first recognizs disease mentions Weber et al., 2020) and the NEN model normalizes the recognized mention (Leaman et al., 2013; Ferr\u00e9 et al., 2020; Xu et al., 2020; Vashishth et al., 2020) , a few approaches employ a joint learning architecture for these tasks Lou et al., 2017) . These joint approaches simultaneously recognize and normalize disease names utilizing their mutual benefits. For example, Leaman et al. (2013) demonstrated that dictionary-matching features, which are commonly used for NEN, are also effective for NER. While these joint learning models achieve high performance for both NER and NEN, they predominately rely on hand-crafted features, which are difficult to construct because of the domain knowledge requirement.",
"cite_spans": [
{
"start": 120,
"end": 139,
"text": "Weber et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 192,
"end": 213,
"text": "(Leaman et al., 2013;",
"ref_id": "BIBREF6"
},
{
"start": 214,
"end": 233,
"text": "Ferr\u00e9 et al., 2020;",
"ref_id": null
},
{
"start": 234,
"end": 250,
"text": "Xu et al., 2020;",
"ref_id": "BIBREF18"
},
{
"start": 251,
"end": 274,
"text": "Vashishth et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 347,
"end": 364,
"text": "Lou et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 489,
"end": 509,
"text": "Leaman et al. (2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, a neural network (NN)-based model that does not require any hand-crafted features was applied to the joint learning of NER and NEN (Zhao et al., 2019) . NER and NEN were defined as two token-level classification tasks, i.e., their model classified each token into IOB2 tags and concepts, respectively. Although their model achieved the state-of-the-art performance for both NER and NEN, a concept that does not appear in training data (i.e., zero-shot situation) can not be predicted properly.",
"cite_spans": [
{
"start": 141,
"end": 160,
"text": "(Zhao et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One possible approach to handle this zero-shot situation is utilizing the dictionary-matching features. Suppose that an input sentence \"Classic polyarteritis nodosa is a systemic vasculitis\" is given, where \"polyarteritis nodosa\" is the target entity. Even if it does not appear in the training data, it can be recognized and normalized by referring to a controlled vocabulary that contains \"Polyarteritis Nodosa (MeSH: D010488).\" Combining such looking-up mechanisms with NN-based models, however, is not a trivial task; dictionary matching must be performed at the entity-level, whereas standard NN-based NER and NEN tasks are performed at the token-level (for example, Zhao et al., 2019) .",
"cite_spans": [
{
"start": 672,
"end": 690,
"text": "Zhao et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To overcome this problem, we propose a novel end-to-end approach for NER and NEN that com- The overview of our model. It combines the dictionary-matching scores with the context score obtained from PubMedBERT. The red boxes are the target span and \"ci\" in the figure is the \"i\"-th concept in the dictionary. bines dictionary-matching features with NN-based models. Based on the span-based model introduced by Lee et al. 2017, our model first computes span representations for all possible spans of the input sentence and then combines the dictionarymatching features with the span representations. Using the score obtained from both features, it directly classifies the disease concept. Thus, our model can handle the zero-shot problem by using dictionary-matching features while maintaining the performance of the NN-based models. Our model is also effective in situations other than the zero-shot condition. Consider the following input sentence: \"We report the case of a patient who developed acute hepatitis,\" where \"hepatitis\" is the target entity that should be normalized to \"drug-induced hepatitis.\" While the longer span \"acute hepatitis\" also appears plausible for standalone NER models, our end-to-end architecture assigns a higher score to the correct shorter span \"hepatitis\" due to the existence of the normalized term (\"drug-induced hepatitis\") in the dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Through the experiments using two major NER and NEN corpora, we demonstrate that our model achieves competitive results for both corpora. Further analysis illustrates that the dictionarymatching features improve the performance of NEN in the zero-shot and other situations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions are twofold: (i) We propose a novel end-to-end model for disease name recognition and normalization that utilizes both NN-based features and dictionary-matching features; (ii) We demonstrate that combining dictionary-matching features with an NN-based model is highly effective for normalization, especially in the zero-shot situations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given an input sentence, which is a sequence of words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2.1"
},
{
"text": "x = {x 1 , x 2 , \u2022 \u2022 \u2022 , x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2.1"
},
{
"text": "|X| } in the biomedical literature, let us define S as a set of all possible spans, and L as a set of concepts that contains the special label Null for a non-disease span. Our goal is to predict a set of labeled spans",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2.1"
},
{
"text": "y = { i, j, d k } |Y | k=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2.1"
},
{
"text": ", where (i, j) \u2208 S is the word index in the sentence, and d \u2208 L is the concept of diseases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2.1"
},
{
"text": "Our model predicts the concepts for each span based on the score, which is represented by the weighted sum of two factors: the context score score cont obtained from span representations and the dictionary-matching score score dict . Figure 1 illustrates the overall architecture of our model. We denote the score of the span s as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 242,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},
{
"text": "score(s, c) = score cont (s, c) + \u03bbscore dict (s, c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},
{
"text": "where c \u2208 L is the candidate concept and \u03bb is the hyperparameter that balances the scores. For the concept prediction, the scores of all possible spans and concepts are calculated, and then the concept with the highest score is selected as the predicted concept for each span as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},
{
"text": "y = arg max c\u2208L score(s, c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},
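{
"text": "The scoring rule can be sketched as follows (a minimal NumPy illustration; the array names, shapes, and toy data are illustrative and not taken from our actual implementation):

import numpy as np

def predict_concepts(score_cont, score_dict, lam=0.9):
    # score_cont, score_dict: (num_spans, num_concepts) arrays, where one
    # column corresponds to the special Null (non-disease) label.
    # lam: the balancing hyperparameter (set to 0.9 in our experiments).
    total = score_cont + lam * score_dict   # score(s, c) = score_cont(s, c) + lambda * score_dict(s, c)
    return total.argmax(axis=-1)            # arg max over concepts c in L for every span s

# toy usage: 3 candidate spans, 4 concepts (index 0 = Null)
rng = np.random.default_rng(0)
predicted = predict_concepts(rng.normal(size=(3, 4)), rng.random(size=(3, 4)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},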
{
"text": "Context score The context score is computed in a similar way to that of Lee et al. (2017) , which is based on the span representations. To compute the representations of each span, the input tokens are first encoded into the token embeddings. We used BioBERT as the encoder, which is a variation of bidirectional encoder representations from transformers (BERT) that is trained on a large amount of biomedical text. Given an input sentence containing T words, we can obtain the contextualized embeddings of each token using BioBERT as follows:",
"cite_spans": [
{
"start": 72,
"end": 89,
"text": "Lee et al. (2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},
{
"text": "h 1:T = BERT(x 1 , x 2 , \u2022 \u2022 \u2022 , x T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},
{
"text": "where h 1:T is the input tokens embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},
{
"text": "Span representations are obtained by concatenating several features from the token embeddings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},
{
"text": "g s = [h start(s) , h end(s) ,\u0125 s , \u03c6(s)] g s = GELU(FFNN(g s ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},
{
"text": "where h start(s) and h end(s) are the start and end token embeddings of the span, respectively; and\u0125 s is the weighted sum of the token embeddings in the span, which is obtained using an attention mechanism (Bahdanau et al., 2015) . \u03c6(i) is the size of span s. These representations g s are then fed into a simple feed-forward NN, FFNN, and a nonlinear function, GELU (Hendrycks and Gimpel, 2016) .",
"cite_spans": [
{
"start": 207,
"end": 230,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 368,
"end": 396,
"text": "(Hendrycks and Gimpel, 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},
{
"text": "Given a particular span representation and a candidate concept as the inputs, we formulate the context score as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},
{
"text": "score cont (s, c) = g s \u2022 W c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},
{
"text": "where W \u2208 R |L|\u00d7d g is the weight matrix associated with each concept c, and W c represents the weight vector for the concept c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},
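{
"text": "The context score computation can be sketched as follows (a simplified PyTorch illustration; the module and variable names are illustrative, and the span-size feature \u03c6(s) is realized here as a learned width embedding, which is one possible choice):

import torch
import torch.nn as nn

class ContextScorer(nn.Module):
    def __init__(self, hidden_size, num_concepts, width_dim=20, max_width=10):
        super().__init__()
        self.attn = nn.Linear(hidden_size, 1)                     # token-level attention weights
        self.width_emb = nn.Embedding(max_width + 1, width_dim)   # span-size feature phi(s)
        self.ffnn = nn.Linear(3 * hidden_size + width_dim, hidden_size)
        self.act = nn.GELU()
        self.concepts = nn.Linear(hidden_size, num_concepts, bias=False)  # rows play the role of W_c

    def forward(self, h, start, end):
        # h: (T, hidden_size) contextualized token embeddings from the BERT encoder
        # start, end: inclusive word indices of the candidate span
        tokens = h[start:end + 1]
        alpha = torch.softmax(self.attn(tokens).squeeze(-1), dim=0)
        h_hat = (alpha.unsqueeze(-1) * tokens).sum(dim=0)          # attention-weighted sum of span tokens
        width = self.width_emb(torch.tensor(end - start + 1))
        g_tilde = torch.cat([h[start], h[end], h_hat, width], dim=-1)
        g = self.act(self.ffnn(g_tilde))                           # g_s = GELU(FFNN(g~_s))
        return self.concepts(g)                                    # score_cont(s, c) for every concept c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},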
{
"text": "Dictionary-matching score We used the cosine similarity of the TF-IDF vectors as the dictionarymatching features. Because there are several synonyms for a concept, we calculated the cosine similarity for all synonyms of the concept and used the maximum cosine similarity as the score for each concept. The TF-IDF is calculated using the character-level n-gram statistics computed for all diseases appearing in the training dataset and controlled vocabulary. For example, given the span \"breast cancer,\" synonyms with high cosine similarity are \"breast cancer (1.0)\" and \"male breast cancer (0.829).\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},
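{
"text": "The dictionary-matching score can be sketched as follows (a minimal illustration assuming scikit-learn; the two-entry dictionary and the character n-gram range are illustrative, not the exact configuration of our experiments):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative dictionary: concept ID -> synonyms (a tiny MEDIC-like excerpt).
dictionary = {
    'MESH:D001943': ['breast cancer', 'breast neoplasms', 'male breast cancer'],
    'MESH:D010488': ['polyarteritis nodosa'],
}

# Character n-gram TF-IDF fitted on all synonyms (in the full setting, disease
# names from the training set are included as well).
all_names = [syn for syns in dictionary.values() for syn in syns]
vectorizer = TfidfVectorizer(analyzer='char', ngram_range=(2, 4)).fit(all_names)

def dict_scores(span_text):
    # return, for each concept, the maximum cosine similarity over its synonyms
    span_vec = vectorizer.transform([span_text])
    scores = {}
    for concept, synonyms in dictionary.items():
        sims = cosine_similarity(span_vec, vectorizer.transform(synonyms))
        scores[concept] = float(sims.max())
    return scores

print(dict_scores('breast cancer'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2"
},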
{
"text": "To evaluate our model, we chose two major datasets used in disease name recognition and normalization against a popular controlled vocabulary, MEDIC (Davis et al., 2012) . Both datasets, the National Center for Biotechnology Information Disease (NCBID) corpus (Dogan et al., 2014) and the BioCreative V Chemical Disease Relation (BC5CDR) task corpus (Li et al., 2016) , comprise of PubMed titles and abstracts annotated with disease names and their corresponding normalized term IDs (CUIs). NCBID provides 593 training, 100 development, and 100 test data splits, while BC5CDR evenly divides 1500 data into the three sets. We adopted the same version of MEDIC as TaggerOne used, and that we dismissed non-disease entity annotations contained in BC5CDR.",
"cite_spans": [
{
"start": 149,
"end": 169,
"text": "(Davis et al., 2012)",
"ref_id": "BIBREF2"
},
{
"start": 260,
"end": 280,
"text": "(Dogan et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 350,
"end": 367,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 3.1 Datasets",
"sec_num": "3"
},
{
"text": "We compared several baselines to evaluate our model. DNorm (Leaman et al., 2013) and NormCo (Wright et al., 2019) were used as pipeline models due to their high performance. In addition, we used the pipeline systems consisting of stateof-the-art models: BioBERT for NER and BioSyn (Sung et al., 2020) for NEN.",
"cite_spans": [
{
"start": 59,
"end": 80,
"text": "(Leaman et al., 2013)",
"ref_id": "BIBREF6"
},
{
"start": 92,
"end": 113,
"text": "(Wright et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 281,
"end": 300,
"text": "(Sung et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "3.2"
},
{
"text": "TaggerOne and Transition-based model (Lou et al., 2017) are used as joint-learning models. These models outperformed the pipeline models in NCBID and BC5CDR. For the model introduced by Zhao et al. (2019), we cannot reproduce the performance reported by them. Instead, we report the performance of the simple token-level joint learning model based on the BioBERT, which referred as \"joint (token)\".",
"cite_spans": [
{
"start": 37,
"end": 55,
"text": "(Lou et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "3.2"
},
{
"text": "We performed several preprocessing steps: splitting the text into sentences using the NLTK toolkit (Bird et al., 2009) , removing punctuations, and resolving abbreviations using Ab3P (Sohn et al., 2008) , a common abbreviation resolution module. We also merged disease names in each training set into a controlled vocabulary, following the methods of Lou et al. (2017) .",
"cite_spans": [
{
"start": 99,
"end": 118,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF1"
},
{
"start": 183,
"end": 202,
"text": "(Sohn et al., 2008)",
"ref_id": "BIBREF13"
},
{
"start": 351,
"end": 368,
"text": "Lou et al. (2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3"
},
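{
"text": "The sentence splitting and punctuation removal steps can be sketched as follows (a rough illustration; Ab3P is an external tool and is omitted here, and the helper name is illustrative):

import string
import nltk

nltk.download('punkt', quiet=True)  # sentence tokenizer models used by sent_tokenize

def preprocess(abstract_text):
    # split the abstract into sentences, then strip punctuation characters
    sentences = nltk.sent_tokenize(abstract_text)
    table = str.maketrans('', '', string.punctuation)
    return [s.translate(table) for s in sentences]

print(preprocess('Classic polyarteritis nodosa is a systemic vasculitis. We report a case.'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3"
},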
{
"text": "For training, we set the learning rate to 5e-5, and mini-batch size to 32. \u03bb was set to 0.9 using the development sets. For BC5CDR, we trained the model using both the training and development sets following . For computational efficiency, we only consider spans with up to 10 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3"
},
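{
"text": "The candidate-span enumeration with the 10-word limit can be sketched as follows (an illustrative helper, not taken from our actual implementation):

def enumerate_spans(tokens, max_width=10):
    # all (start, end) word-index pairs, inclusive, of width at most max_width
    spans = []
    for i in range(len(tokens)):
        for j in range(i, min(i + max_width, len(tokens))):
            spans.append((i, j))
    return spans

# a 4-word sentence yields 4 + 3 + 2 + 1 = 10 candidate spans
print(len(enumerate_spans(['classic', 'polyarteritis', 'nodosa', 'is'])))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3"
},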
{
"text": "We evaluated the recognition performance of our model using micro-F1 at the entity level. We consider the predicted spans as true positive when their spans are identical. Following the previous work (Wright et al., 2019; , the performance of NEN was evaluated using micro-F1 at the abstract level. If a predicted concept was found within the gold standard concepts in the abstract, regardless of its location, it was considered as a true positive. Table 1 illustrates that our model mostly achieved the highest F1-scores in both NER and NEN, except for the NEN in BC5CDR, in which the transition-based model displays its strength as a baseline. The proposed model outperformed the pipeline model of the state-of-the-art models for both tasks, which demonstrates that the improvement is attributed not to the strength of BioBERT but the model architecture, including the endto-end approach and combinations of dictionarymatching features.",
"cite_spans": [
{
"start": 199,
"end": 220,
"text": "(Wright et al., 2019;",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 448,
"end": 455,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.4"
},
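{
"text": "The abstract-level metric can be sketched as follows (a simplified illustration; the handling of duplicates and document boundaries may differ from the official evaluation scripts):

def abstract_level_micro_f1(predictions, golds):
    # predictions, golds: dicts mapping abstract ID -> set of concept IDs;
    # a predicted concept is a true positive if it appears anywhere in the
    # gold concepts of the same abstract (location is ignored)
    tp = fp = fn = 0
    for doc_id in golds:
        pred = predictions.get(doc_id, set())
        gold = golds[doc_id]
        tp += len(pred & gold)
        fp += len(pred - gold)
        fn += len(gold - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.4"
},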
{
"text": "Comparing the model variation results, adding dictionary-matching features improved the performance in NEN. The results clearly suggest that dictionary-matching features are effective for NNbased NEN models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results & Discussions",
"sec_num": "4"
},
{
"text": "To analyze the behavior of our model in the zeroshot situation, we investigated the NEN performance on two subsets of both corpora: disease names with concepts that appear in the training data (i.e., standard situation), and disease names with concepts that do not appear in the training data (i.e., the zero-shot situation). Table 2 shows the number of mentions and concepts in each situation. Table 3 displays the results of the zero-shot and standard situation. The proposed model with dictionary-matching features can classify disease concepts in the zero-shot situation, whereas the NN-based classification model cannot normalize the disease names. The results of the standard situation demonstrate that combining dictionary-matching features also improves the performance even when target concepts appear in the training data. This finding implies that an NN-based model can benefit from dictionary-matching features, even if the models can learn from many training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 326,
"end": 333,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 395,
"end": 402,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Contribution of Dictionary-Matching",
"sec_num": "4.1"
},
{
"text": "We examined 100 randomly sampled sentences to determine the contributions of dictionary-matching features. There are 32 samples in which the models predicted concepts correctly by adding dictionarymatching features. Most of these samples are disease concepts that do not appear in the training set but appear in the dictionary. For example, \"pure red cell aplasis (MeSH: D012010)\" is not in the BC5CDR training set while the MEDIC contains \"Pure Red-Cell Aplasias\" for \"D012010\". In this case, a high dictionary-matching score clearly leads to a correct prediction in the zero-shot situation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case study",
"sec_num": "4.2"
},
{
"text": "In contrast, there are 32 samples in which the dictionary-matching features cause errors. The sources of this error type are typically general disease names in the MEDIC. For example, \"Death (MeSH:D003643)\" is incorrectly predicted as a disease concept in NER. Because these words are also used in the general context, our model overestimated their dictionary-matching scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case study",
"sec_num": "4.2"
},
{
"text": "Furthermore, in the remaining samples, our model predicted the code properly and the span incorrectly. For example, although \"thoracic hematomyelia\" is labeled as \"MeSH: D020758\" in the BC5CDR test set, our model recognized this as \"hematomyelia.\" In this case, our model mostly relied on the dictionary-matching features and misclassifies the span because 'hematomyelia\" is in the MEDIC but not in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case study",
"sec_num": "4.2"
},
{
"text": "Our model is inferior to the transition-based model for BC5CDR. One possible reason is that the transition-based model utilizes normalized terms that co-occur within a sentence, whereas our model does not. Certain disease names that co-occur within a sentence are strongly useful for normalizing disease names. Although BERT implicitly considers the interaction between disease names via the attention mechanism, a more explicit method is preferable for normalizing diseases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "4.3"
},
{
"text": "Another limitation is that our model treats the dictionary entries equally. Because certain terms in the dictionary may also be used for non-disease concepts, such as gene names, we must consider the relative importance of each concept.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "4.3"
},
{
"text": "We proposed a end-to-end model for disease name recognition and normalization that combines the NN-based model with the dictionary-matching features. Our model achieved highly competitive results for the NCBI disease corpus and BC5CDR corpus, demonstrating that incorporating dictionary-matching features into an NN-based model can improve its performance. Further experiments exhibited that dictionary-matching features enable our model to accurately predict the concepts in the zero-shot situation, and they are also beneficial in the other situation. While the results illustrate the effectiveness of our model, we found several areas for improvement, such as the general terms in the dictionary and the interaction between disease names within a sentence. A possible future direction to deal with general terms is to jointly train the parameters representing the importance of each synonyms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Natural language processing with Python: Analyzing text with the natural language toolkit",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: Analyz- ing text with the natural language toolkit. O'Reilly Media, Inc.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "MEDIC: A practical disease vocabulary used at the comparative toxicogenomics database",
"authors": [
{
"first": "Allan",
"middle": [
"Peter"
],
"last": "Davis",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wiegers",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"J"
],
"last": "Michael C Rosenstein",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mattingly",
"suffix": ""
}
],
"year": 2012,
"venue": "Database",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allan Peter Davis, Thomas C Wiegers, Michael C Rosenstein, and Carolyn J Mattingly. 2012. MEDIC: A practical disease vocabulary used at the com- parative toxicogenomics database. Database, 2012:bar065.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "NCBI disease corpus: A resource for disease name recognition and concept normalization",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Rezarta Islamaj Dogan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2014,
"venue": "J. Biomed. Inform",
"volume": "47",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. NCBI disease corpus: A resource for dis- ease name recognition and concept normalization. J. Biomed. Inform., 47:1-10.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Pierre Zweigenbaum, and Claire N\u00e9dellec. 2020. C-Norm: a neural approach to few-shot entity normalization",
"authors": [
{
"first": "Arnaud",
"middle": [],
"last": "Ferr\u00e9",
"suffix": ""
},
{
"first": "Louise",
"middle": [],
"last": "Del\u00e9ger",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Bossy",
"suffix": ""
}
],
"year": null,
"venue": "BMC Bioinformatics",
"volume": "21",
"issue": "23",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arnaud Ferr\u00e9, Louise Del\u00e9ger, Robert Bossy, Pierre Zweigenbaum, and Claire N\u00e9dellec. 2020. C-Norm: a neural approach to few-shot entity normalization. BMC Bioinformatics, 21(Suppl 23):579.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Gaussian error linear units (GELUs)",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.08415"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. Gaus- sian error linear units (GELUs). arXiv preprint arXiv:1606.08415.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "DNorm: Disease name normalization with pairwise learning to rank",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Rezarta Islamaj Dogan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2013,
"venue": "Bioinformatics",
"volume": "29",
"issue": "22",
"pages": "2909--2917",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Leaman, Rezarta Islamaj Dogan, and Zhiy- ong Lu. 2013. DNorm: Disease name normaliza- tion with pairwise learning to rank. Bioinformatics, 29(22):2909-2917.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "TaggerOne: Joint named entity recognition and normalization with semi-markov models",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2016,
"venue": "Bioinformatics",
"volume": "32",
"issue": "18",
"pages": "2839--2846",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Leaman and Zhiyong Lu. 2016. TaggerOne: Joint named entity recognition and normaliza- tion with semi-markov models. Bioinformatics, 32(18):2839-2846.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "BioBERT: a pretrained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre- trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "End-to-end neural coreference resolution",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "188--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference resolu- tion. In EMNLP, pages 188-197.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BEST: Next-Generation biomedical entity search tool for knowledge discovery from biomedical literature",
"authors": [
{
"first": "Sunwon",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Kyubum",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jaehoon",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Seongsoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Minji",
"middle": [],
"last": "Jeon",
"suffix": ""
},
{
"first": "Sangrak",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Donghee",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Aik-Choon",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2016,
"venue": "PLoS One",
"volume": "11",
"issue": "10",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunwon Lee, Donghyeon Kim, Kyubum Lee, Jae- hoon Choi, Seongsoon Kim, Minji Jeon, Sangrak Lim, Donghee Choi, Sunkyu Kim, Aik-Choon Tan, and Jaewoo Kang. 2016. BEST: Next-Generation biomedical entity search tool for knowledge dis- covery from biomedical literature. PLoS One, 11(10):e0164680.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BioCreative V CDR task corpus: A resource for chemical disease relation extraction",
"authors": [
{
"first": "Jiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yueping",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Robin",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Chih-Hsuan",
"middle": [],
"last": "Sciaky",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Allan",
"middle": [
"Peter"
],
"last": "Leaman",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"J"
],
"last": "Davis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mattingly",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Wiegers",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2016,
"venue": "Database",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sci- aky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. BioCreative V CDR task corpus: A resource for chemical disease relation extraction. Database, 2016:baw068.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A transition-based joint model for disease named entity recognition and normalization",
"authors": [
{
"first": "Yinxia",
"middle": [],
"last": "Lou",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shufeng",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Donghong",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2017,
"venue": "Bioinformatics",
"volume": "33",
"issue": "15",
"pages": "2363--2371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinxia Lou, Yue Zhang, Tao Qian, Fei Li, Shufeng Xiong, and Donghong Ji. 2017. A transition-based joint model for disease named entity recognition and normalization. Bioinformatics, 33(15):2363-2371.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Abbreviation definition identification based on automatic precision estimates",
"authors": [
{
"first": "Sunghwan",
"middle": [],
"last": "Sohn",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Donald",
"suffix": ""
},
{
"first": "Won",
"middle": [],
"last": "Comeau",
"suffix": ""
},
{
"first": "W John",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wilbur",
"suffix": ""
}
],
"year": 2008,
"venue": "BMC Bioinformatics",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunghwan Sohn, Donald C Comeau, Won Kim, and W John Wilbur. 2008. Abbreviation definition iden- tification based on automatic precision estimates. BMC Bioinformatics, 9:402.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Biomedical entity representations with synonym marginalization",
"authors": [
{
"first": "Mujeen",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Hwisang",
"middle": [],
"last": "Jeon",
"suffix": ""
},
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "3641--3650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mujeen Sung, Hwisang Jeon, Jinhyuk Lee, and Jae- woo Kang. 2020. Biomedical entity representations with synonym marginalization. In ACL, pages 3641- 3650.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "MedType: Improving Medical Entity Linking with Semantic Type Prediction",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Vashishth",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Newman-Griffis",
"suffix": ""
},
{
"first": "Ritam",
"middle": [],
"last": "Dutt",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00460"
]
},
"num": null,
"urls": [],
"raw_text": "Shikhar Vashishth, Rishabh Joshi, Denis Newman- Griffis, Ritam Dutt, and Carolyn Rose. 2020. MedType: Improving Medical Entity Linking with Semantic Type Prediction. arXiv preprint arXiv:2005.00460.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "HUNER: improving biomedical NER with pretraining",
"authors": [
{
"first": "Leon",
"middle": [],
"last": "Weber",
"suffix": ""
},
{
"first": "Jannes",
"middle": [],
"last": "M\u00fcnchmeyer",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Maryam",
"middle": [],
"last": "Habibi",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Leser",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "36",
"issue": "",
"pages": "295--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leon Weber, Jannes M\u00fcnchmeyer, Tim Rockt\u00e4schel, Maryam Habibi, and Ulf Leser. 2020. HUNER: im- proving biomedical NER with pretraining. Bioinfor- matics, 36(1):295-302.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "NormCo: Deep disease normalization for biomedical knowledge base construction",
"authors": [
{
"first": "Dustin",
"middle": [],
"last": "Wright",
"suffix": ""
},
{
"first": "Yannis",
"middle": [],
"last": "Katsis",
"suffix": ""
},
{
"first": "Raghav",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "Chun-Nan",
"middle": [],
"last": "Hsu",
"suffix": ""
}
],
"year": 2019,
"venue": "AKBC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dustin Wright, Yannis Katsis, Raghav Mehta, and Chun-Nan Hsu. 2019. NormCo: Deep disease nor- malization for biomedical knowledge base construc- tion. In AKBC.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A Generate-and-Rank framework with semantic type regularization for biomedical concept normalization",
"authors": [
{
"first": "Dongfang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zeyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "8452--8464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongfang Xu, Zeyu Zhang, and Steven Bethard. 2020. A Generate-and-Rank framework with semantic type regularization for biomedical concept normal- ization. In ACL, pages 8452-8464.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "CD-REST: A system for extracting chemical-induced disease relation in literature",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yaoyun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jingqi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hee-Jin",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2016,
"venue": "Database",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Xu, Yonghui Wu, Yaoyun Zhang, Jingqi Wang, Hee-Jin Lee, and Hua Xu. 2016. CD-REST: A sys- tem for extracting chemical-induced disease relation in literature. Database, 2016:baw036.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A neural multi-task learning framework to jointly model medical named entity recognition and normalization",
"authors": [
{
"first": "Sendong",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sicheng",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "817--824",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sendong Zhao, Ting Liu, Sicheng Zhao, and Fei Wang. 2019. A neural multi-task learning framework to jointly model medical named entity recognition and normalization. In AAAI, pages 817-824.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Figure 1: The overview of our model. It combines the dictionary-matching scores with the context score obtained from PubMedBERT. The red boxes are the target span and \"ci\" in the figure is the \"i\"-th concept in the dictionary.",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"content": "<table/>",
"html": null,
"text": "F1 scores of NER and NEN in NCBID and BC5CDR. Bold font represents the highest score.",
"num": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td/><td>Methods</td><td colspan=\"2\">NCBID BC5CDR</td></tr><tr><td>zero-shot</td><td>Ours without dictionary Ours</td><td>0 0.704</td><td>0 0.597</td></tr><tr><td>standard</td><td>Ours without dictionary Ours</td><td>0.854 0.905</td><td>0.846 0.877</td></tr></table>",
"html": null,
"text": "Number of mentions and concepts in standard and zero-shot situations.",
"num": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table><tr><td>: F1 scores for NEN of NCBID and BC5CDR</td></tr><tr><td>subsets for zero-shot situation where disease concepts</td></tr><tr><td>do not appear in training data and the standard situation</td></tr><tr><td>where they do appear in training data.</td></tr></table>",
"html": null,
"text": "",
"num": null,
"type_str": "table"
}
}
}
}