|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:07:45.246806Z" |
|
}, |
|
"title": "Improving Biomedical Pretrained Language Models with Knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Tsinghua University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Yijia", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Alibaba Group", |
|
"institution": "", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Chuanqi", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Alibaba Group", |
|
"institution": "", |
|
"location": {} |
|
}, |
|
"email": "chuanqi.tcq@alibaba-inc.com" |
|
}, |
|
{ |
|
"first": "Songfang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Alibaba Group", |
|
"institution": "", |
|
"location": {} |
|
}, |
|
"email": "songfang.hsf@alibaba-inc.com" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Alibaba Group", |
|
"institution": "", |
|
"location": {} |
|
}, |
|
"email": "f.huang@alibaba-inc.com" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Pretrained language models have shown success in many natural language processing tasks. Many works explore incorporating knowledge into language models. In the biomedical domain, experts have taken decades of effort on building large-scale knowledge bases. For example, the Unified Medical Language System (UMLS) contains millions of entities with their synonyms and defines hundreds of relations among entities. Leveraging this knowledge can benefit a variety of downstream tasks such as named entity recognition and relation extraction. To this end, we propose KeBioLM, a biomedical pretrained language model that explicitly leverages knowledge from the UMLS knowledge bases. Specifically, we extract entities from PubMed abstracts and link them to UMLS. We then train a knowledge-aware language model that firstly applies a text-only encoding layer to learn entity representation and applies a text-entity fusion encoding to aggregate entity representation. Besides, we add two training objectives as entity detection and entity linking. Experiments on the named entity recognition and relation extraction from the BLURB benchmark demonstrate the effectiveness of our approach. Further analysis on a collected probing dataset shows that our model has better ability to model medical knowledge.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Pretrained language models have shown success in many natural language processing tasks. Many works explore incorporating knowledge into language models. In the biomedical domain, experts have taken decades of effort on building large-scale knowledge bases. For example, the Unified Medical Language System (UMLS) contains millions of entities with their synonyms and defines hundreds of relations among entities. Leveraging this knowledge can benefit a variety of downstream tasks such as named entity recognition and relation extraction. To this end, we propose KeBioLM, a biomedical pretrained language model that explicitly leverages knowledge from the UMLS knowledge bases. Specifically, we extract entities from PubMed abstracts and link them to UMLS. We then train a knowledge-aware language model that firstly applies a text-only encoding layer to learn entity representation and applies a text-entity fusion encoding to aggregate entity representation. Besides, we add two training objectives as entity detection and entity linking. Experiments on the named entity recognition and relation extraction from the BLURB benchmark demonstrate the effectiveness of our approach. Further analysis on a collected probing dataset shows that our model has better ability to model medical knowledge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Large-scale pretrained language models (PLMs) are proved to be effective in many natural language processing (NLP) tasks (Peters et al., 2018; Devlin et al., 2019) . However, there are still many works that explore multiple strategies to improve the PLMs. Firstly, in specialized domains (i.e biomedical domain), many works demonstrate that using indomain text (i.e. PubMed and MIMIC for biomedical domain) can further improve downstream tasks Figure 1 : An example of the biomedical sentence. Two entities \"glycerin\" and \"inflammation\" are linked to C0017861 (1,2,3-Propanetriol) and C0011603 (dermatitis) respectively with a relation triplet (C0017861, may_prevent, C0011603) in UMLS.", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 142, |
|
"text": "(Peters et al., 2018;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 163, |
|
"text": "Devlin et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 444, |
|
"end": 452, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "over general-domain PLMs (Lee et al., 2020; Peng et al., 2019; Gu et al., 2020; Shin et al., 2020; Lewis et al., 2020; Alsentzer et al., 2019) . Secondly, unlike training language models (LMs) with unlabeled text, many works explore training the model with structural knowledge (i.e. triplets and facts) for better language understanding (Zhang et al., 2019; Peters et al., 2019; F\u00e9vry et al., 2020; . In this work, we propose to combine the above two strategies for a better Knowledge enhanced Biomedical pretrained Language Model (KeBioLM).", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 43, |
|
"text": "(Lee et al., 2020;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 44, |
|
"end": 62, |
|
"text": "Peng et al., 2019;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 63, |
|
"end": 79, |
|
"text": "Gu et al., 2020;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 80, |
|
"end": 98, |
|
"text": "Shin et al., 2020;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 99, |
|
"end": 118, |
|
"text": "Lewis et al., 2020;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 119, |
|
"end": 142, |
|
"text": "Alsentzer et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 338, |
|
"end": 358, |
|
"text": "(Zhang et al., 2019;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 359, |
|
"end": 379, |
|
"text": "Peters et al., 2019;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 399, |
|
"text": "F\u00e9vry et al., 2020;", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As an applied discipline that needs a lot of facts and evidence, the biomedical and clinical fields have accumulated data and knowledge from a very early age (Ashburner et al., 2000; Stearns et al., 2001) . One of the most representative work is Unified Medical Language System (UMLS) (Bodenreider, 2004 ) that contains more than 4M entities with their synonyms and defines over 900 kinds of relations. Figure 1 shows an example. There are two entities \"glycerin\" and \"inflammation\" that can be linked to C0017861 (1,2,3-Propanetriol) and C0011603 (dermatitis) respectively with a may_prevent relation in UMLS. As the most important facts in biomedical text, entities and relations can provide information for better text understanding (Xu et al., 2018; Yuan et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 158, |
|
"end": 182, |
|
"text": "(Ashburner et al., 2000;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 183, |
|
"end": 204, |
|
"text": "Stearns et al., 2001)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 303, |
|
"text": "(Bodenreider, 2004", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 736, |
|
"end": 753, |
|
"text": "(Xu et al., 2018;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 754, |
|
"end": 772, |
|
"text": "Yuan et al., 2020)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 403, |
|
"end": 411, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To this end, we propose to improve biomedical PLMs with explicit knowledge modeling. Firstly, we process the PubMed text to link entities to the knowledge base. We apply an entity recognition and linking tool ScispaCy to annotate 660M entities in 3.5M documents. Secondly, we implement a knowledge enhanced language model based on F\u00e9vry et al. (2020) , which performs a text-only encoding and a text-entity fusion encoding. Text-only encoding is responsible for bridging text and entities. Text-entity fusion encoding fuses information from tokens and knowledge from entities. Finally, two objectives as entity extraction and linking are added to learn better entity representations. To be noticed, we initialize the entity embeddings with TransE (Bordes et al., 2013) , which leverages not only entity but also relation information of the knowledge graph.", |
|
"cite_spans": [ |
|
{ |
|
"start": 331, |
|
"end": 350, |
|
"text": "F\u00e9vry et al. (2020)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 747, |
|
"end": 768, |
|
"text": "(Bordes et al., 2013)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We conduct experiments on the named entity recognition (NER) and relation extraction (RE) tasks in the BLURB benchmark dataset. Results show that our KeBioLM outperforms the previous work with average scores of 87.1 and 81.2 on 5 NER datasets and 3 RE datasets respectively. Furthermore, our KeBioLM also achieves better performance in a probing task that requires models to fill the masked entity in UMLS triplets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We summary our contributions as follows 1 :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We propose KeBioLM, a biomedical pretrained language model that explicitly incorporates knowledge from UMLS.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We conduct experiments on 5 NER datasets and 3 RE datasets. Results demonstrate that our KeBioLM achieves the best performance on both NER and RE tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We collect a cloze-style probing dataset from UMLS relation triplets. The probing results show that our KeBioLM absorbs more knowledge than other biomedical PLMs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2 Related Work", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Models like ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) show the effectiveness of the paradigm of first pre-training an LM on the unlabeled text then fine-tuning the model on the downstream NLP tasks. However, direct application of the LMs pre-trained on the encyclopedia and web text usually fails on the biomedical domain, because of the distinctive terminologies and idioms. The gap between general and biomedical domains inspires the researchers to propose LMs specially tailored for the biomedical domain. BioBERT (Lee et al., 2020) is the most widely used biomedical PLM which is trained on PubMed abstracts and PMC articles. It outperforms vanilla BERT in named entity recognition, relation extraction, and question answering tasks. Jin et al. (2019) train BioELMo with PubMed abstracts, and find features extracted by BioELMo contain entity-type and relational information. Different training corpora have been used for enhancing performance of sub-domain tasks. ClinicalBERT (Alsentzer et al., 2019) , BlueBERT (Peng et al., 2019) and bio-lm (Lewis et al., 2020) utilize clinical notes MIMIC to improve clinical-related downstream tasks. SciB-ERT uses papers from the biomedical and computer science domain as training corpora with a new vocabulary. KeBioLM is trained on PubMed abstracts to adapt to PubMedrelated downstream tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 38, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 48, |
|
"end": 69, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 533, |
|
"end": 551, |
|
"text": "(Lee et al., 2020)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 754, |
|
"end": 771, |
|
"text": "Jin et al. (2019)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 998, |
|
"end": 1022, |
|
"text": "(Alsentzer et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1034, |
|
"end": 1053, |
|
"text": "(Peng et al., 2019)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 1065, |
|
"end": 1085, |
|
"text": "(Lewis et al., 2020)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Biomedical PLMs", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "To understand the factors in pretraining biomedical LMs, Gu et al. (2020) study pretraining techniques systematically and propose PubMedBERT pretrained from scratch with an in-domain vocabulary. Lewis et al. (2020) also find using an indomain vocabulary enhances the downstream performances. This inspires us to utilize the in-domain vocabulary for KeBioLM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 214, |
|
"text": "Lewis et al. (2020)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Biomedical PLMs", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "LMs like ELMo and BERT are trained to predict correlation between tokens, ignoring the meanings behind them. To capture both the textual and conceptual information, several knowledge-enhanced PLMs are proposed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge-enhanced LMs", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Entities are used for bridging tokens and knowledge graphs. Zhang et al. (2019) align tokens and entities within sentences, and aggregate token and entity representations via two multi-head self-attentions. KnowBert (Peters et al., 2019) and Entity as Experts (EAE) (F\u00e9vry et al., 2020) use the entity linker to perform entity disambiguation for candidate entity spans and enhance token representations using entity embeddings. Inspired by entity-enhanced PLMs, we follow the model of EAE to inject biomedical knowledge into KeBi-oLM by performing entity detection and linking.", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 79, |
|
"text": "Zhang et al. (2019)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 237, |
|
"text": "(Peters et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 286, |
|
"text": "(F\u00e9vry et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge-enhanced LMs", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Relation triplets provide intrinsic knowledge be-tween entity pairs. KEPLER learns the knowledge embeddings through relation triplets while pretraining. K-BERT (Liu et al., 2020) converts input sentences into sentence trees by relation triplets to infuse knowledge. In the biomedical domain, He et al. (2020) inject disease knowledge to existing PLMs by predicting diseases names and aspects on Wikipedia passages. Michalopoulos et al. (2020) use UMLS synonyms to supervise masked language modeling. We propose KeBioLM to infuse various kinds of biomedical knowledge from UMLS including but not limited to diseases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 178, |
|
"text": "(Liu et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 308, |
|
"text": "He et al. (2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 415, |
|
"end": 442, |
|
"text": "Michalopoulos et al. (2020)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge-enhanced LMs", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In this paper, we assume to access an entity set E = {e 1 , ..., e t }. For a sentence x = {x 1 , ..., x n }, we assume some spans m = (x i , ..., x j ) can be grounded to one or more entities in E. We further assume the disjuncture of these spans. In this paper, we use UMLS to set the entity set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To explicitly model both the textual and conceptual information, we follow F\u00e9vry et al. 2020and use a multi-layer self-attention network to encode both the text and entities. The model can be viewed as building the links between text and entities in the lower layers and fusing the text and entity representation in the upper layers. The overall architecture is shown in Figure 2 . To be more specific, we set the PubMedBERT (Gu et al., 2020) as our backbone. We split the layers of the backbone into two groups, performing a text-only encoding and text-entity fusion encoding respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 425, |
|
"end": 442, |
|
"text": "(Gu et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 371, |
|
"end": 379, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Architecture", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Text-only encoding. For the first group, which is closer to the input, we extract the final hidden states and perform a token-wise classification to identify if the token is at the beginning, inside, or outside of a mention (i.e., the BIO scheme). The probabilities of the B/I/O label {l i } are written as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h 1 , ..., h n = Transformers 0 (x 1 , ..., x n ) (1) p(l i | x) = softmax(W l h i + b l )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Model Architecture", |
|
"sec_num": "3.1" |
|
}, |
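As a concrete illustration of Equations (1)-(2), the following minimal PyTorch sketch shows a token-wise B/I/O mention-detection head on top of the text-only encoder. It is an illustrative sketch rather than the released KeBioLM code; the hidden size and layer names are assumptions.

```python
import torch
import torch.nn as nn

class MentionDetectionHead(nn.Module):
    """Token-wise B/I/O classifier over text-only hidden states (Eqs. 1-2)."""

    def __init__(self, hidden_size: int = 768, num_labels: int = 3):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)  # W_l, b_l

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden) from the first transformer group
        logits = self.classifier(hidden_states)
        return torch.log_softmax(logits, dim=-1)  # log p(l_i | x)

# usage sketch
head = MentionDetectionHead()
h = torch.randn(2, 16, 768)   # stand-in for the Transformers_0 output
log_probs = head(h)           # (2, 16, 3), labels {B, I, O}
```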
|
{ |
|
"text": "After identifying the mention boundary, we maintain a function M(i) \u2192 E \u222a {NIL}, which returns the entity of the i-th token belongs. 2 We collect the mentions with a sentence x. For a mention m = (s, t), where s and t represents the starting and ending indexes of m, we encode it as the concatenation of hidden states of the boundary tokens", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 134, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "h m = [h s ; h t ].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For an entity e j \u2208 E in the KG, we denote its entity embedding as e j . For a mention m, we search the k nearest entities of its projected representation h m = W m h m + b m in the entity embedding space, obtaining a set of entities E . The normalized similarity between h m and e j is calculated as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "a j = exp(h m \u2022 e j ) e k \u2208E exp(h m \u2022 e k )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Model Architecture", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The additional entity representation e m of m is calculated as a weighted sum of the embeddings e m = e j \u2208E a j \u2022 e j .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture", |
|
"sec_num": "3.1" |
|
}, |
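The mention-to-entity attention described above (Equation 3 and the weighted sum) can be sketched as follows: the mention representation [h_s; h_t] is projected into the entity-embedding space, scored against its k nearest entities with a softmax, and the weighted sum of their embeddings yields e_m. The brute-force nearest-neighbor search and all variable names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mention_entity_representation(h_s, h_t, entity_emb, W_m, b_m, k=100):
    """Project a mention span and aggregate its k nearest entity embeddings."""
    h_m = torch.cat([h_s, h_t], dim=-1)           # [h_s; h_t]
    h_proj = W_m @ h_m + b_m                      # project into entity space
    scores = entity_emb @ h_proj                  # dot product with every entity
    top_scores, top_idx = scores.topk(k)          # k nearest entities E'
    a = F.softmax(top_scores, dim=-1)             # Eq. (3), normalized over E'
    e_m = (a.unsqueeze(-1) * entity_emb[top_idx]).sum(dim=0)  # weighted sum
    return e_m, top_idx, a

# usage sketch with toy sizes
hidden, ent_dim, num_entities = 768, 256, 1000
h_s, h_t = torch.randn(hidden), torch.randn(hidden)
entity_emb = torch.randn(num_entities, ent_dim)   # e.g. TransE-initialized
W_m, b_m = torch.randn(ent_dim, 2 * hidden), torch.randn(ent_dim)
e_m, idx, weights = mention_entity_representation(h_s, h_t, entity_emb, W_m, b_m, k=5)
```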
|
{ |
|
"text": "Text-entity fusion encoding. After getting the mentions and entities, we fuse the entity embeddings with the text embedding by summation. For the i-th token, the entity-enhanced embedding is calculated as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h * i = h i + (W e e m + b e ) , \u2203m, M(i) = m, h i , otherwise.", |
|
"eq_num": "(4" |
|
} |
|
], |
|
"section": "Model Architecture", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ") M(i) = m represents the i-th token belong to entity e m . The sequence of h * 1 , ..., h * n is then fed into the second group of transformer layers to generate text-entity representations. The final hidden states h f i are calculated as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "h f 1 , ..., h f n = Transformers 1 (h * 1 , ..., h * n ) (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Architecture", |
|
"sec_num": "3.1" |
|
}, |
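The fusion step of Equations (4)-(5) can be sketched as follows: entity representations are projected to the hidden size and added to the tokens inside linked mentions before the second group of transformer layers is applied. The projection layer and mask handling here are illustrative assumptions.

```python
import torch
import torch.nn as nn

def fuse_entities(h, e_per_token, in_mention, W_e: nn.Linear):
    """Eq. (4): h*_i = h_i + (W_e e_m + b_e) for tokens inside a linked mention."""
    # h: (batch, seq, hidden); e_per_token: (batch, seq, ent_dim)
    # in_mention: (batch, seq) boolean mask, True where M(i) != NIL
    injected = W_e(e_per_token)                         # project to hidden size
    return h + injected * in_mention.unsqueeze(-1)      # other tokens keep h_i

# usage sketch: the fused states then go through the second group (Eq. 5)
batch, seq, hidden, ent_dim = 2, 16, 768, 256
W_e = nn.Linear(ent_dim, hidden)
h = torch.randn(batch, seq, hidden)
e_tok = torch.randn(batch, seq, ent_dim)
mask = torch.zeros(batch, seq, dtype=torch.bool)
mask[:, 3:6] = True                                     # tokens 3..5 belong to one mention
h_star = fuse_entities(h, e_tok, mask, W_e)
# h_f = transformers_1(h_star)  # second group of self-attention layers
```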
|
{ |
|
"text": "We have three pretraining tasks for KeBioLM. Masked language modeling is a cloze-style task for predicting masked tokens. Since the entities are the main focus of our model, we add two tasks as entity detection and linking respectively following F\u00e9vry et al. (2020) . Finally, we jointly minimize the following loss:", |
|
"cite_spans": [ |
|
{ |
|
"start": 246, |
|
"end": 265, |
|
"text": "F\u00e9vry et al. (2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pretraining Tasks", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "L = L M LM + L ED + L EL (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pretraining Tasks", |
|
"sec_num": "3.2" |
|
}, |
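A minimal sketch of the joint objective in Equation (6): the three pretraining losses are computed as cross-entropy terms and summed with equal weights. The ignore-index handling for unmasked tokens is an assumption; the paper only states that the three terms are added.

```python
import torch
import torch.nn.functional as F

def kebiolm_loss(mlm_logits, mlm_labels, bio_logits, bio_labels,
                 mention_scores, entity_labels):
    """Eq. (6): L = L_MLM + L_ED + L_EL, each a cross-entropy term."""
    l_mlm = F.cross_entropy(mlm_logits.view(-1, mlm_logits.size(-1)),
                            mlm_labels.view(-1), ignore_index=-100)  # masked tokens only
    l_ed = F.cross_entropy(bio_logits.view(-1, bio_logits.size(-1)),
                           bio_labels.view(-1), ignore_index=-100)   # B/I/O detection
    l_el = F.cross_entropy(mention_scores, entity_labels)            # linking over entity set
    return l_mlm + l_ed + l_el

# usage sketch with toy tensors
vocab, ents = 30522, 1000
loss = kebiolm_loss(torch.randn(2, 16, vocab), torch.randint(0, vocab, (2, 16)),
                    torch.randn(2, 16, 3), torch.randint(0, 3, (2, 16)),
                    torch.randn(4, ents), torch.randint(0, ents, (4,)))
```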
|
{ |
|
"text": "Masked Language Modeling Like BERT and other LMs, we predict the masked tokens {x i } in inputs using the final hidden representations {h f i }. The loss L M LM is calculated based on the crossentropy of masked and predicted tokens: Whole word masking is successful in training masked language models (Devlin et al., 2019; Cui et al., 2019) . In the biomedical domain, entities are the semantic units of texts. Therefore, we extend this technique to whole entity masking. We mask all tokens within a word or entity span. KeBioLM replaces 12% of tokens to [MASK] and 1.5% tokens to random tokens. This is more difficult for models to recover tokens, which leads to learning better entity representations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 301, |
|
"end": 322, |
|
"text": "(Devlin et al., 2019;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 323, |
|
"end": 340, |
|
"text": "Cui et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pretraining Tasks", |
|
"sec_num": "3.2" |
|
}, |
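The whole entity masking strategy described above can be sketched as follows: word or linked-entity spans are sampled and every wordpiece inside a chosen span is masked together, until roughly 12% of tokens are replaced with [MASK] and a further 1.5% with random tokens. The span-sampling details are assumptions; the paper only fixes the overall rates.

```python
import random

def whole_entity_mask(tokens, spans, mask_rate=0.12, random_rate=0.015,
                      mask_token="[MASK]", vocab=("the", "cell", "protein", "dose")):
    """Mask every wordpiece inside sampled word/entity spans together."""
    tokens, spans = list(tokens), list(spans)
    random.shuffle(spans)                              # pick spans in random order
    budget_mask = round(len(tokens) * mask_rate)       # ~12% of tokens -> [MASK]
    budget_rand = round(len(tokens) * random_rate)     # ~1.5% of tokens -> random token
    replaced = 0
    for start, end in spans:
        if replaced >= budget_mask + budget_rand:
            break
        use_mask = replaced < budget_mask
        for i in range(start, end):                    # the whole span is masked together
            tokens[i] = mask_token if use_mask else random.choice(vocab)
            replaced += 1
    return tokens

# usage sketch: span (3, 5) covers the wordpieces of one linked entity;
# the rate is raised so this tiny example actually masks something
print(whole_entity_mask(["glycerin", "may", "prevent", "der", "##matitis", "."],
                        spans=[(3, 5), (0, 1)], mask_rate=0.4))
```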
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p M (x i | x) = softmax(W m h f i + b m )", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Pretraining Tasks", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L M LM = \u2212 log p M (x i | x)", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Pretraining Tasks", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Entity Detection Entity detection is an important task in biomedical NLP to link the tokens to entities. Thus, We add an entity detection loss by calculating the cross-entropy for BIO labels:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pretraining Tasks", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L ED = n i=1 \u2212 log p(l i | x)", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Pretraining Tasks", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Entity Linking One medical entity in different names linking to the same index permits the model to learn better text-entity representations. To link mention {m} in texts with entities {e} in entity set E, we calculate the cross-entropy loss using similarities between {h m } and entities in E:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pretraining Tasks", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L EL = \u2212 log exp(h m \u2022 e) e j \u2208E exp(h m \u2022 e j )", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Pretraining Tasks", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Given a sentence S from PubMed content, we need to recognize entities and link them to the UMLS knowledge base. We use ScispaCy , a robust biomedical NER and entity linking model, to annotate the sentence. Unlike previous work (Vashishth et al., 2020 ) that only retains recognized entities in a subset of Medical Subject Headings (MeSH) (Lipscomb, 2000) , we relax the restriction to annotate all entities to UMLS 2020 AA release 3 whose linking scores are higher than a threshold of 0.85.", |
|
"cite_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 250, |
|
"text": "(Vashishth et al., 2020", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 338, |
|
"end": 354, |
|
"text": "(Lipscomb, 2000)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Creation", |
|
"sec_num": "3.3" |
|
}, |
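The annotation step can be sketched with ScispaCy's UMLS linker as below: entities recognized in a sentence are kept only when the best linking score clears the 0.85 threshold. This assumes a spaCy v3-style pipeline with the en_core_sci_sm model and the scispacy_linker component; the exact ScispaCy model and configuration used by the authors are not stated in this section.

```python
import spacy
from scispacy.linking import EntityLinker  # registers the "scispacy_linker" factory

nlp = spacy.load("en_core_sci_sm")
nlp.add_pipe("scispacy_linker", config={"linker_name": "umls"})

def annotate(sentence, threshold=0.85):
    """Return (mention text, CUI, score) for entities linked above the threshold."""
    doc = nlp(sentence)
    kept = []
    for ent in doc.ents:
        if not ent._.kb_ents:            # no UMLS candidate found
            continue
        cui, score = ent._.kb_ents[0]    # best-scoring candidate
        if score >= threshold:
            kept.append((ent.text, cui, score))
    return kept

print(annotate("Glycerin may prevent inflammation."))
```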
|
{ |
|
"text": "In this section, we first introduce the pretraining details of KeBioLM. Then we introduce the BLURB datasets for evaluating our approach. Finally, we introduce a probing dataset based on UMLS triplets for evaluating knowledge modeling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We use ScispaCy to acquire 477K CUIs and 660M entities among 3.5M PubMed documents 4 from PubMedDS dataset (Vashishth et al., 2020) as training corpora. We initialize entity embeddings by TransE (Bordes et al., 2013) which learns embeddings from relation triplets. Relation triplets come from UMLS ,203 5,347 5,385 15,935 10,373 8,993 BC5dis 4,182 4,244 4,424 12,850 8,846 3,878 NCBI 5,137 787 960 6,884 1, The parameters of transformers in KeBioLM are initialized from the checkpoint of PubMedBERT. We also use the vocabulary from PubMedBERT. AdamW (Loshchilov and Hutter, 2017) is used as the optimizer for KeBioLM with 10,000 steps warmup and linear decay. We use an 8-layer transformer for text-only encoding and a 4-layer transformer for text-entity fusion encoding. We set the learning rate to 5e-5, batch size to 512, max sequence length to 512, and training epochs to 2. For each input sequence, we limit the max entities count to 50 and the excessive entities will be truncated. To generate entity representation e m , the most k = 100 similar entities are used. We train our model with 8 NVIDIA 16GB V100 GPUs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 131, |
|
"text": "(Vashishth et al., 2020)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 568, |
|
"end": 597, |
|
"text": "(Loshchilov and Hutter, 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 424, |
|
"text": ",203 5,347 5,385 15,935 10,373 8,993 BC5dis 4,182 4,244 4,424 12,850 8,846 3,878 NCBI 5,137 787 960 6,884 1,", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Pretraining Details", |
|
"sec_num": "4.1" |
|
}, |
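The TransE initialization mentioned above can be sketched as follows: entity and relation embeddings are trained so that h + r is close to t for observed UMLS triplets and far from corrupted ones, and the learned entity table then initializes KeBioLM's entity embeddings. Dimensions, negative sampling, and the single training step shown are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TransE(nn.Module):
    """Translation-based embeddings (Bordes et al., 2013): score(h, r, t) = ||h + r - t||."""

    def __init__(self, num_entities, num_relations, dim=100):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)

    def score(self, h, r, t):
        return (self.ent(h) + self.rel(r) - self.ent(t)).norm(p=1, dim=-1)

model = TransE(num_entities=1000, num_relations=50)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MarginRankingLoss(margin=1.0)

h = torch.randint(0, 1000, (64,))
r = torch.randint(0, 50, (64,))
t = torch.randint(0, 1000, (64,))
t_neg = torch.randint(0, 1000, (64,))            # corrupt the tail as a negative sample
pos, neg = model.score(h, r, t), model.score(h, r, t_neg)
loss = loss_fn(neg, pos, torch.ones_like(pos))   # push negative distances above positive ones
loss.backward()
opt.step()
# model.ent.weight can then initialize the entity embedding table of KeBioLM
```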
|
{ |
|
"text": "#Train #Dev #Test #Ments #Ments (UMLS) #Ments (KeBioLM) BC5chem 5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pretraining Details", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In this section, we evaluate KeBioLM on NER tasks and RE tasks of the BLURB benchmark 5 (Gu et al., 2020) . For all tasks, we use the preprocessed version from BLURB. We measure the NER and RE datasets in terms of F1-score. Table 1 shows the counts of training instances in BLURB datasets (i.e., annotated mentions for NER datasets and sentences with two mentions for RE datasets). We also report the count of annotated mentions overlapping with the UMLS 2020 release and Ke-BioLM in each dataset. The percentage of men-5 https://microsoft.github.io/BLURB/ tions overlapping with KeBioLM ranges from 8.7% (NCBI-disease) to 58.5% (DDI) which indicates that KeBioLM learns entity knowledge related to downstream tasks. JNLPBA (Collier and Kim, 2004) includes 2,000 PubMed abstracts to identify molecular biology-related entities. We ignore entity types in JNLPBA following Gu et al. (2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 105, |
|
"text": "(Gu et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 724, |
|
"end": 747, |
|
"text": "(Collier and Kim, 2004)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 871, |
|
"end": 887, |
|
"text": "Gu et al. (2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 231, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "ChemProt (Krallinger et al., 2017) classifies the relation between chemicals and proteins within sentences from PubMed abstracts. Sentences are classified into 6 classes including a negative class.", |
|
"cite_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 34, |
|
"text": "(Krallinger et al., 2017)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Relation Extraction", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "DDI (Herrero-Zazo et al., 2013) is a RE dataset with sentence-level drug-drug relation on PubMed abstracts. There are four classes for relation: advice, effect, mechanism, and false.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Relation Extraction", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "GAD (Bravo et al., 2015 ) is a gene-disease relation binary classification dataset collected from PubMed sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 4, |
|
"end": 23, |
|
"text": "(Bravo et al., 2015", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Relation Extraction", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "NER We follow Gu et al. (2020) BIO tagging scheme and ignore the entity types in NER datasets. We classify labels of tokens by a linear layer on top of the hidden representations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 30, |
|
"text": "Gu et al. (2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fine-tuning Details", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "RE We replace the entity mentions in RE datasets with entity indicators like @DISEASE$ or @GENE$ to avoid models classifying relations by memorizing entity names. We add these entity indicators into the vocabulary of LMs. We concatenate the representation of two concerned entities and feed it into a linear layer for relation classification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fine-tuning Details", |
|
"sec_num": "4.3" |
|
}, |
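A minimal sketch of the RE preprocessing described above: the two concerned mentions are replaced with typed indicator tokens such as @GENE$ and @DISEASE$, and the indicators are added to the tokenizer vocabulary so they are kept whole. The checkpoint name and the string-replacement helper are illustrative assumptions rather than the exact BLURB preprocessing script.

```python
from transformers import AutoTokenizer

def mask_entities(sentence, head, tail, head_type="GENE", tail_type="DISEASE"):
    """Replace the two concerned mentions with indicator tokens (e.g. @GENE$)."""
    return (sentence.replace(head, f"@{head_type}$")
                    .replace(tail, f"@{tail_type}$"))

# illustrative checkpoint; any BERT-style biomedical tokenizer works the same way
tokenizer = AutoTokenizer.from_pretrained(
    "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract")
tokenizer.add_tokens(["@GENE$", "@DISEASE$", "@CHEMICAL$"])
# after adding tokens, the model would call resize_token_embeddings(len(tokenizer))

text = mask_entities("BRCA1 mutations are associated with breast cancer.",
                     head="BRCA1", tail="breast cancer")
print(text)  # "@GENE$ mutations are associated with @DISEASE$."
```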
|
{ |
|
"text": "Parameters We adopt AdamW as the optimizer with a 10% steps linear warmup and a linear decay. We search the hyperparameters of learning rate among 1e-5, 3e-5, and 5e-5. We fine-tune the model for 60 epochs. We evaluate the model at the end of each epoch and choose the best model according to the evaluation score on the development set. We set batch size as 16 when fine-tuning. The maximal input lengths are 512 for all NER datasets. We truncate ChemProt and DDI to 256 tokens, and GAD to 128 tokens. To perform a fair comparison, we fine-tune our model with 5 different seeds and report the average score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fine-tuning Details", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We compare KeBioLM with following base-size biomedical PLMs on the above-mentioned datasets:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "BioBERT (Lee et al., 2020) , SciBERT , ClinicalBERT (Alsentzer et al., 2019) , BlueBERT (Peng et al., 2019) , bio-lm (Lewis et al., 2020) , diseaseBERT (He et al., 2020) , and Pub-", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 26, |
|
"text": "(Lee et al., 2020)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 52, |
|
"end": 76, |
|
"text": "(Alsentzer et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 88, |
|
"end": 107, |
|
"text": "(Peng et al., 2019)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 117, |
|
"end": 137, |
|
"text": "(Lewis et al., 2020)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 152, |
|
"end": 169, |
|
"text": "(He et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "MedBERT (Gu et al., 2020) 6 . Table 2 shows the main results on NER and RE datasets of the BLURB benchmark. In addition, we report the average scores for NER and RE tasks respectively. KeBioLM achieves state-ofthe-art performance for NER and RE tasks. Compared with the strong baseline BioBERT, KeBi-oLM shows stable improvements in NER and RE datasets (+1.1 in NER, +1.9 in RE). Compared with our baseline model PubMedBERT, KeBioLM performs significantly better in BC5dis, NCBI, JNLPBA, ChemProt, and GAD (p \u2264 0.05 based on one-sample t-test) and achieves better average scores (+0.8 in NER, +0.6 in RE). DiseaseBERT is a model carefully designed for predicting disease names and aspects, which leads to better performance in the BC5dis dataset (+0.4). They only report the promising results in disease-related tasks, however, our model obtains consistent promising performances across all kinds of biomedical tasks. In the BC2GM dataset, KeBioLM outperforms our baseline model PubMedBERT and other PLMs except for bio-lm, and the standard deviation of the BC2GM task is evidently larger than other tasks. Another exception is the DDI dataset, we observe a slight performance degradation compared to PubMedBERT (-0.5). The average performances demonstrate that fusing entity knowledge into the LM boosts the performances across the board. Table 3 : Ablation studies for KeBioLM architecture on the BLURB benchmark. We use -wem, +rand and +frz to represent pretraining setting (a), (b) and (c), respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 27, |
|
"text": "(Gu et al., 2020) 6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 37, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 1340, |
|
"end": 1347, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We conduct ablation tests to validate the effectiveness of each part in KeBioLM. We pretrain the model with the following settings and reuse the same parameters described above: (a) Remove whole entity masking and retain whole word masking while pretraining (-wem); (b) Initialize entity embeddings randomly (+rand); (c) Initialize entity embeddings by TransE and freeze the entity embeddings while pretraining (+frz).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ablation Test", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "In Table 3 , we observe the following results. Firstly, comparing KeBioLM with setting (a) shows that whole entity masking boosting the performances consistently in all datasets (+0.5 in NER, +0.9 in RE). Secondly, comparing KeBioLM with setting (b) indicates initializing the entity embeddings randomly degrades performances in NER tasks and RE tasks (-0.4 in NER, -1.2 in RE). Entity embeddings initialized by TransE utilize relation knowledge in UMLS and enhance the results. Thirdly, freezing the entity embeddings in setting (c) reduces the performances on all datasets compared to KeBioLM except BC2GM (-0.4 in NER, -1.1 in RE). This indicates that updating entity embedding while pretraining helps KeBioLM to have better text-entity representations, and this leads to better downstream performances.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ablation Test", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "To evaluate how the count of transformer layers affects our model, we pretrain KeBioLM with the different number of layers. For the convenience of notation, denote l 0 is the layer count of text-only encoding and l 1 is the layer count of text-entity fusion encoding. We have the following settings: (i) l 0 = 8, l 1 = 4 (our base model), (ii)l 0 = 4, l 1 = 8, (iii)l 0 = 12, l 1 = 0 (without the second group of transformer layers, {h i } are used for token representations). Results are shown in Table 4 . Our base model (i) has better performance than setting (ii) (+0.3 in NER, +0.7 in RE). Training setting (iii) is equal to a traditional BERT model with additional entity extraction and entity linking tasks. The comparison with (i) and (iii) indicates that text-entity representations have better performances than textonly representations (+0.5 in NER, +0.9 in RE) in the same amount of parameters.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 498, |
|
"end": 505, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ablation Test", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "l 0 = 8 l 1 = 4 l 0 = 4 l 1 = 8 l 0 = 12 l 1 = 0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ablation Test", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "We establish a probing dataset based on UMLS triplets to evaluate how LMs understand medical knowledge via pretraining.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "UMLS Knowledge Probing", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "UMLS triplets are stored in the form of (s, r, o) where s and o are CUIs in UMLS and r is a relation type. We generate two queries for one triplet based on names of CUIs and relation type:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Dataset", |
|
"sec_num": "4.6.1" |
|
}, |
|
{ |
|
"text": "\u2022 Q 1 : [CLS] s r [MASK] [SEP] \u2022 Q 2 : [CLS] [MASK] r o [SEP]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Dataset", |
|
"sec_num": "4.6.1" |
|
}, |
|
{ |
|
"text": "For example, we sample a triplet and terms of corresponded entities (C0048038:apraclonidine, may_prevent, C0028840:ocular hypertension). We remove the underscores of relation names and generate two queries (we omit [CLS] and [SEP]):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Dataset", |
|
"sec_num": "4.6.1" |
|
}, |
|
{ |
|
"text": "\u2022 Q 1 : apraclonidine may prevent [MASK].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Dataset", |
|
"sec_num": "4.6.1" |
|
}, |
|
{ |
|
"text": "\u2022 Q 2 : [MASK] may prevent ocular hypertension.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Dataset", |
|
"sec_num": "4.6.1" |
|
}, |
|
{ |
|
"text": "#Queries #Relations #Avg. CUIs 143,771 922 2.39 Table 5 : The number of generated UMLS relation probing dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 55, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Probing Dataset", |
|
"sec_num": "4.6.1" |
|
}, |
|
{ |
|
"text": "For relation names end with \"of\", \"as\" , and \"by\", we add \"is\" in front of relation names. For instance, translation_of is converted to is translation of, classified_as is converted to is classified as, and used_by is converted to is used by. Commonly, different relation triplets can generate same query since triplets may overlap (s, r, \u2212) or (\u2212, r, o) with each other. We deduplicate all repeat queries and randomly choose at most 200 queries from all relation types in UMLS. After deduplication, one query can have multiple CUIs as answers. For example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Dataset", |
|
"sec_num": "4.6.1" |
|
}, |
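The query-generation procedure above can be sketched as follows: each (s, r, o) triplet yields one query with the object masked and one with the subject masked, underscores in the relation name are replaced with spaces, and relations ending in "of", "as", or "by" get a leading "is". This is a straightforward reading of the description, not the authors' exact script.

```python
def relation_to_text(relation):
    """Convert a UMLS relation name (e.g. 'may_prevent') into query text."""
    text = relation.replace("_", " ")
    if relation.split("_")[-1] in {"of", "as", "by"}:
        text = "is " + text                  # translation_of -> is translation of
    return text

def make_queries(subj_name, relation, obj_name):
    """Two cloze queries per (s, r, o) triplet: mask the object, then the subject."""
    rel = relation_to_text(relation)
    q1 = f"[CLS] {subj_name} {rel} [MASK] [SEP]"
    q2 = f"[CLS] [MASK] {rel} {obj_name} [SEP]"
    return q1, q2

print(make_queries("apraclonidine", "may_prevent", "ocular hypertension"))
# ('[CLS] apraclonidine may prevent [MASK] [SEP]',
#  '[CLS] [MASK] may prevent ocular hypertension [SEP]')
```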
|
{ |
|
"text": "\u2022 Q: [MASK] may treat essential tremor.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Dataset", |
|
"sec_num": "4.6.1" |
|
}, |
|
{ |
|
"text": "\u2022 A 1 : C0282321: propranolol hydrochloride", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Dataset", |
|
"sec_num": "4.6.1" |
|
}, |
|
{ |
|
"text": "\u2022 A 2 : C0033497: propranolol", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Dataset", |
|
"sec_num": "4.6.1" |
|
}, |
|
{ |
|
"text": "We summarize our generated UMLS relation probing dataset in Table 5 . Unlike LAMA (Petroni et al., 2019) and X-FACTR (Jiang et al., 2020 ) that contain less than 50 kinds of relation, our probing task is a more difficult task requiring a model to decode entities over 900 kinds of relations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 104, |
|
"text": "(Petroni et al., 2019)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 117, |
|
"end": 136, |
|
"text": "(Jiang et al., 2020", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 67, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Probing Dataset", |
|
"sec_num": "4.6.1" |
|
}, |
|
{ |
|
"text": "To probe PLMs using generated queries, we require models to recover the masked tokens. Since biomedical entities are usually formed by multiple words and each word can be tokenized into several wordpieces (Wu et al., 2016) , models have to recover multiple [MASK] tokens. We limit the max length of one entity is 10 for decoding. We decode the multi [MASK] tokens using the confidence-based method described in Jiang et al. (2020) . We also implement a beam search for decoding. Unlike beam search in machine translation that decodes tokens from left to right, we decode tokens in an arbitrary order. For each step, we calculate the probabilities of all undecoded masked tokens based on original input and decoded tokens. We predict only one token within undecoded tokens with the top B = 5 accumulated log probabilities. Decoding will be accomplished after count of [MASK] times iterations and we keep the best B = 5 decoding results. We skip the refinement stage since it is time-consuming and does not significantly improve the results. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 222, |
|
"text": "(Wu et al., 2016)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 411, |
|
"end": 430, |
|
"text": "Jiang et al. (2020)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi [MASK] Decoding", |
|
"sec_num": "4.6.2" |
|
}, |
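A greedy variant of the confidence-based decoding described above can be sketched as follows: the query contains several [MASK] slots, and at each step the single undecoded position with the most confident prediction is filled, conditioning on the tokens already decoded. The beam search and entity-length enumeration are omitted, and the checkpoint is a stand-in for any masked LM with a prediction head.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

name = "bert-base-uncased"   # stand-in checkpoint for illustration
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

def fill_masks_by_confidence(text_with_masks):
    """Fill [MASK] slots one at a time, most confident position first."""
    ids = tok(text_with_masks, return_tensors="pt")["input_ids"]
    mask_id = tok.mask_token_id
    while (ids == mask_id).any():
        with torch.no_grad():
            logits = model(input_ids=ids).logits[0]        # (seq_len, vocab)
        probs = logits.softmax(dim=-1)
        mask_pos = (ids[0] == mask_id).nonzero(as_tuple=True)[0]
        best_prob, best_tok = probs[mask_pos].max(dim=-1)   # best token per masked slot
        pick = best_prob.argmax()                           # most confident slot first
        ids[0, mask_pos[pick]] = best_tok[pick]
    return tok.decode(ids[0], skip_special_tokens=True)

print(fill_masks_by_confidence("apraclonidine may prevent [MASK] [MASK]."))
```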
|
{ |
|
"text": "Since multiple correct CUIs exist for one query, we consider a model answering the query correctly if any decoded tokens in any [MASK] length hit any of the correct CUIs. We evaluate the probing results by the relation-level macro-recall@5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metric", |
|
"sec_num": "4.6.3" |
|
}, |
|
{ |
|
"text": "We classify probing queries into two types based on their difficulties. Type 1: answers within queries (24,260 queries); Type 2: answers not in queries (119,511 queries). Here are examples of Type 1 (Q 1 and A 1 ) and Type 2 (Q 2 and A 2 ) queries:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Results", |
|
"sec_num": "4.6.4" |
|
}, |
|
{ |
|
"text": "\u2022 Q 1 : [MASK] has form tacrolimus monohydrate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Results", |
|
"sec_num": "4.6.4" |
|
}, |
|
{ |
|
"text": "\u2022 A 1 : C0085149: tacrolimus", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Results", |
|
"sec_num": "4.6.4" |
|
}, |
|
{ |
|
"text": "\u2022 Q 2 : cosyntropin may diagnose [MASK].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Results", |
|
"sec_num": "4.6.4" |
|
}, |
|
{ |
|
"text": "\u2022 A 2 : C0001614: adrenal cortex disease Table 6 summarizes the probing results of different PLMs according to query types. Checkpoints of BioBERT and PubMedBERT miss a cls/predictions layer and cannot perform the probe directly. Compared to other PLMs, KeBioLM achieves the best scores in both two types and obviously outperforms BlueBERT and ClincalBERT with a large margin, which indicates that KeBioLM learns more medical knowledge. Table 7 lists some probing examples. SciBERT can decode medical entities for [MASK] tokens which may be unrelated. KeBioLM decodes relation correctly and is aware of the synonyms of hepatic. KeBioLM states that Vaccination may prevent tetanus which is a correct but not precise statement.", |
|
"cite_spans": [ |
|
{ |
|
"start": 514, |
|
"end": 520, |
|
"text": "[MASK]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 48, |
|
"text": "Table 6", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 437, |
|
"end": 444, |
|
"text": "Table 7", |
|
"ref_id": "TABREF11" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Probing Results", |
|
"sec_num": "4.6.4" |
|
}, |
|
{ |
|
"text": "In this paper, we propose to improve biomedical pretrained language models with knowledge. We propose KeBioLM which applies text-only encoding and text-entity fusion encoding and has two additional entity-related pretraining tasks: entity detection and entity linking. Extensive experiments have shown that KeBioLM outperforms other PLMs on NER and RE datasets of the BLURB benchmark. We further probe biomedical PLMs by querying UMLS relation triplets, which indicates KeBioLM absorbs more biomedical knowledge than others. In this work, we only leverage the relation information in TransE to initialize the entity embeddings. We will further investigate how to directly incorporate the relation information into LMs in the future.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our codes and model can be found at https:// github.com/GanjinZero/KeBioLM.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "NIL is returned when there is no entity being matched.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.nlm.nih.gov/research/ umls/licensedcontent/umlsarchives04. html#2020AA4 The count of documents in PubMedDS is based on https://arxiv.org/pdf/2005.00460v1.pdf.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use BioBERT v1.1, SciBERT-scivocab-uncased, Bio-ClinicalBERT, BlueBERT-pubmed-mimic, bio-lm(RoBERTabase-PM-M3-Voc), diseaseBERT-biobert and PubMedBERTabstract versions for comparison.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank the anonymous reviewers for their helpful comments and suggestions. This work is supported by Alibaba Group through Alibaba Research Intern Program.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Publicly available clinical BERT embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Alsentzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Boag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Hung", |
|
"middle": [], |
|
"last": "Weng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Di", |
|
"middle": [], |
|
"last": "Jindi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tristan", |
|
"middle": [], |
|
"last": "Naumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Mcdermott", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "72--78", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-1909" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clini- cal BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78, Minneapolis, Minnesota, USA. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Gene ontology: tool for the unification of biology", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Ashburner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Catherine", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Ball", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Judith", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Blake", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Botstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heather", |
|
"middle": [], |
|
"last": "Butler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Cherry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Allan", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Davis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kara", |
|
"middle": [], |
|
"last": "Dolinski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Selina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dwight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Janan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Eppig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Nature genetics", |
|
"volume": "25", |
|
"issue": "1", |
|
"pages": "25--29", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Ashburner, Catherine A Ball, Judith A Blake, David Botstein, Heather Butler, J Michael Cherry, Allan P Davis, Kara Dolinski, Selina S Dwight, Janan T Eppig, et al. 2000. Gene ontology: tool for the unification of biology. Nature genetics, 25(1):25-29.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "SciB-ERT: A pretrained language model for scientific text", |
|
"authors": [ |
|
{ |
|
"first": "Iz", |
|
"middle": [], |
|
"last": "Beltagy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Lo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arman", |
|
"middle": [], |
|
"last": "Cohan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3615--3620", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1371" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3615- 3620, Hong Kong, China. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The unified medical language system (umls): integrating biomedical terminology", |
|
"authors": [ |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Bodenreider", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Nucleic acids research", |
|
"volume": "32", |
|
"issue": "suppl_1", |
|
"pages": "267--270", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olivier Bodenreider. 2004. The unified medical lan- guage system (umls): integrating biomedical termi- nology. Nucleic acids research, 32(suppl_1):D267- D270.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Translating embeddings for modeling multirelational data", |
|
"authors": [ |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Usunier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Garcia-Duran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oksana", |
|
"middle": [], |
|
"last": "Yakhnenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Neural Information Processing Systems (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In Neural Information Processing Systems (NIPS), pages 1-9.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research", |
|
"authors": [ |
|
{ |
|
"first": "\u00c0lex", |
|
"middle": [], |
|
"last": "Bravo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janet", |
|
"middle": [], |
|
"last": "Pi\u00f1ero", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N\u00faria", |
|
"middle": [], |
|
"last": "Queralt-Rosinach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Rautschka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Furlong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "BMC bioinformatics", |
|
"volume": "16", |
|
"issue": "1", |
|
"pages": "1--17", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "\u00c0lex Bravo, Janet Pi\u00f1ero, N\u00faria Queralt-Rosinach, Michael Rautschka, and Laura I Furlong. 2015. Ex- traction of relations between genes and diseases from text and large-scale data analysis: implica- tions for translational research. BMC bioinformat- ics, 16(1):1-17.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Introduction to the bio-entity recognition task at JNLPBA", |
|
"authors": [ |
|
{ |
|
"first": "Nigel", |
|
"middle": [], |
|
"last": "Collier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jin-Dong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "73--78", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nigel Collier and Jin-Dong Kim. 2004. Introduc- tion to the bio-entity recognition task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP), pages 73-78, Geneva, Switzerland. COLING.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Pre-training with whole word masking for chinese bert", |
|
"authors": [ |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Cui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wanxiang", |
|
"middle": [], |
|
"last": "Che", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ziqing", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shijin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guoping", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1906.08101" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for chinese bert. arXiv preprint arXiv:1906.08101.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Ncbi disease corpus: a resource for disease name recognition and concept normalization", |
|
"authors": [ |
|
{ |
|
"first": "Rezarta", |
|
"middle": [ |
|
"Islamaj" |
|
], |
|
"last": "Dogan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Leaman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyong", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Journal of biomedical informatics", |
|
"volume": "47", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. Ncbi disease corpus: a resource for dis- ease name recognition and concept normalization. Journal of biomedical informatics, 47:1-10.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Entities as experts: Sparse memory access with entity supervision", |
|
"authors": [ |
|
{ |
|
"first": "Thibault", |
|
"middle": [], |
|
"last": "F\u00e9vry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Livio", |
|
"middle": [ |
|
"Baldini" |
|
], |
|
"last": "Soares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "FitzGerald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eunsol", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kwiatkowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4937--4951", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.400" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thibault F\u00e9vry, Livio Baldini Soares, Nicholas FitzGer- ald, Eunsol Choi, and Tom Kwiatkowski. 2020. En- tities as experts: Sparse memory access with entity supervision. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 4937-4951, Online. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Jianfeng Gao, and Hoifung Poon. 2020. Domainspecific language model pretraining for biomedical natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Tinn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Lucas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naoto", |
|
"middle": [], |
|
"last": "Usuyama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tristan", |
|
"middle": [], |
|
"last": "Naumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hoifung", |
|
"middle": [], |
|
"last": "Poon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2007.15779" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. Domain- specific language model pretraining for biomedi- cal natural language processing. arXiv preprint arXiv:2007.15779.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Infusing Disease Knowledge into BERT for Health Question Answering, Medical Inference and Disease Name Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Yun", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ziwei", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yin", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Caverlee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4604--4614", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.372" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yun He, Ziwei Zhu, Yin Zhang, Qin Chen, and James Caverlee. 2020. Infusing Disease Knowledge into BERT for Health Question Answering, Medical In- ference and Disease Name Recognition. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4604-4614, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "The ddi corpus: An annotated corpus with pharmacological substances and drug-drug interactions", |
|
"authors": [ |
|
{ |
|
"first": "Mar\u00eda", |
|
"middle": [], |
|
"last": "Herrero-Zazo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isabel", |
|
"middle": [], |
|
"last": "Segura-Bedmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paloma", |
|
"middle": [], |
|
"last": "Mart\u00ednez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thierry", |
|
"middle": [], |
|
"last": "Declerck", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Journal of biomedical informatics", |
|
"volume": "46", |
|
"issue": "5", |
|
"pages": "914--920", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mar\u00eda Herrero-Zazo, Isabel Segura-Bedmar, Paloma Mart\u00ednez, and Thierry Declerck. 2013. The ddi corpus: An annotated corpus with pharmacological substances and drug-drug interactions. Journal of biomedical informatics, 46(5):914-920.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "X-FACTR: Multilingual factual knowledge retrieval from pretrained language models", |
|
"authors": [ |
|
{ |
|
"first": "Zhengbao", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonios", |
|
"middle": [], |
|
"last": "Anastasopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Araki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haibo", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5943--5959", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.479" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhengbao Jiang, Antonios Anastasopoulos, Jun Araki, Haibo Ding, and Graham Neubig. 2020. X-FACTR: Multilingual factual knowledge retrieval from pre- trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5943-5959, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Probing biomedical embeddings from language models", |
|
"authors": [ |
|
{ |
|
"first": "Qiao", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bhuwan", |
|
"middle": [], |
|
"last": "Dhingra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xinghua", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "82--89", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qiao Jin, Bhuwan Dhingra, William Cohen, and Xinghua Lu. 2019. Probing biomedical embeddings from language models. In Proceedings of the 3rd Workshop on Evaluating Vector Space Representa- tions for NLP, pages 82-89.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Overview of the biocreative vi chemical-protein interaction track", |
|
"authors": [ |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Krallinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Obdulia", |
|
"middle": [], |
|
"last": "Rabal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saber", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Akhondi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mart\u0131n", |
|
"middle": [], |
|
"last": "P\u00e9rez P\u00e9rez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jes\u00fas", |
|
"middle": [], |
|
"last": "Santamar\u00eda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gael", |
|
"middle": [], |
|
"last": "P\u00e9rez Rodr\u00edguez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the sixth BioCreative challenge evaluation workshop", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "141--146", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martin Krallinger, Obdulia Rabal, Saber A Akhondi, Mart\u0131n P\u00e9rez P\u00e9rez, Jes\u00fas Santamar\u00eda, Gael P\u00e9rez Rodr\u00edguez, et al. 2017. Overview of the biocreative vi chemical-protein interaction track. In Proceed- ings of the sixth BioCreative challenge evaluation workshop, volume 1, pages 141-146.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", |
|
"authors": [ |
|
{ |
|
"first": "Jinhyuk", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wonjin", |
|
"middle": [], |
|
"last": "Yoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungdong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donghyeon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunkyu", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chan", |
|
"middle": [], |
|
"last": "Ho So", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaewoo", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Bioinformatics", |
|
"volume": "36", |
|
"issue": "4", |
|
"pages": "1234--1240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomed- ical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Pretrained language models for biomedical and clinical tasks: Understanding and extending the state-of-the-art", |
|
"authors": [ |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 3rd Clinical Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "146--157", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.clinicalnlp-1.17" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Patrick Lewis, Myle Ott, Jingfei Du, and Veselin Stoy- anov. 2020. Pretrained language models for biomed- ical and clinical tasks: Understanding and extend- ing the state-of-the-art. In Proceedings of the 3rd Clinical Natural Language Processing Workshop, pages 146-157, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database", |
|
"authors": [ |
|
{ |
|
"first": "Jiao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yueping", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robin", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniela", |
|
"middle": [], |
|
"last": "Sciaky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chih-Hsuan", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Leaman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Allan", |
|
"middle": [ |
|
"Peter" |
|
], |
|
"last": "Davis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carolyn", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mattingly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Wiegers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyong", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sci- aky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database, 2016.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Medical subject headings (mesh)", |
|
"authors": [ |
|
{ |
|
"first": "Carolyn", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Lipscomb", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Bulletin of the Medical Library Association", |
|
"volume": "88", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carolyn E Lipscomb. 2000. Medical subject headings (mesh). Bulletin of the Medical Library Association, 88(3):265.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Haotang Deng, and Ping Wang. 2020. K-bert: Enabling language representation with knowledge graph", |
|
"authors": [ |
|
{ |
|
"first": "Weijie", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhe", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiruo", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Ju", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haotang", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ping", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "2901--2908", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1609/aaai.v34i03.5681" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2020. K-bert: Enabling language representation with knowledge graph. Proceedings of the AAAI Conference on Arti- ficial Intelligence, 34(03):2901-2908.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Umlsbert: Clinical domain knowledge augmentation of contextual embeddings using the unified medical language system metathesaurus", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Michalopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuanxin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hussam", |
|
"middle": [], |
|
"last": "Kaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helen", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Wong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George Michalopoulos, Yuanxin Wang, Hussam Kaka, Helen Chen, and Alex Wong. 2020. Umlsbert: Clin- ical domain knowledge augmentation of contextual embeddings using the unified medical language sys- tem metathesaurus.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "ScispaCy: Fast and robust models for biomedical natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "King", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iz", |
|
"middle": [], |
|
"last": "Beltagy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Waleed", |
|
"middle": [], |
|
"last": "Ammar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "319--327", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-5034" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. ScispaCy: Fast and robust models for biomedical natural language processing. In Pro- ceedings of the 18th BioNLP Workshop and Shared Task, pages 319-327, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets", |
|
"authors": [ |
|
{ |
|
"first": "Yifan", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shankai", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyong", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "58--65", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-5006" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 58- 65, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2227--2237", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-1202" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Knowledge enhanced contextual word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Logan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vidur", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "43--54", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1005" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 43-54, Hong Kong, China. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Association for Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Fabio", |
|
"middle": [], |
|
"last": "Petroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rockt\u00e4schel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anton", |
|
"middle": [], |
|
"last": "Bakhtin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuxiang", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2463--2473", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1250" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2463-2473, Hong Kong, China. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "BioMegatron: Larger biomedical domain language model", |
|
"authors": [ |
|
{ |
|
"first": "Hoo-Chang", |
|
"middle": [], |
|
"last": "Shin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evelina", |
|
"middle": [], |
|
"last": "Bakhturina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raul", |
|
"middle": [], |
|
"last": "Puri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mostofa", |
|
"middle": [], |
|
"last": "Patwary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Shoeybi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raghav", |
|
"middle": [], |
|
"last": "Mani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4700--4706", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.379" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hoo-Chang Shin, Yang Zhang, Evelina Bakhturina, Raul Puri, Mostofa Patwary, Mohammad Shoeybi, and Raghav Mani. 2020. BioMegatron: Larger biomedical domain language model. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4700-4706, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Overview of biocreative ii gene mention recognition", |
|
"authors": [ |
|
{ |
|
"first": "Larry", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lorraine", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Tanabe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rie", |
|
"middle": [], |
|
"last": "Johnson nee Ando", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cheng-Ju", |
|
"middle": [], |
|
"last": "Kuo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I-Fang", |
|
"middle": [], |
|
"last": "Chung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chun-Nan", |
|
"middle": [], |
|
"last": "Hsu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu-Shi", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Klinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christoph", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Friedrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kuzman", |
|
"middle": [], |
|
"last": "Ganchev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Genome biology", |
|
"volume": "9", |
|
"issue": "2", |
|
"pages": "1--19", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Larry Smith, Lorraine K Tanabe, Rie Johnson nee Ando, Cheng-Ju Kuo, I-Fang Chung, Chun-Nan Hsu, Yu-Shi Lin, Roman Klinger, Christoph M Friedrich, Kuzman Ganchev, et al. 2008. Overview of biocreative ii gene mention recognition. Genome biology, 9(2):1-19.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Snomed clinical terms: overview of the development process and project status", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"Q" |
|
], |
|
"last": "Stearns", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Price", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kent", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Spackman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amy", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the AMIA Symposium", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Q Stearns, Colin Price, Kent A Spackman, and Amy Y Wang. 2001. Snomed clinical terms: overview of the development process and project sta- tus. In Proceedings of the AMIA Symposium, page 662. American Medical Informatics Association.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Medtype: Improving medical entity linking with semantic type prediction", |
|
"authors": [ |
|
{ |
|
"first": "Shikhar", |
|
"middle": [], |
|
"last": "Vashishth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rishabh", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ritam", |
|
"middle": [], |
|
"last": "Dutt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Denis", |
|
"middle": [], |
|
"last": "Newman-Griffis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carolyn", |
|
"middle": [], |
|
"last": "Rose", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.00460" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shikhar Vashishth, Rishabh Joshi, Ritam Dutt, Denis Newman-Griffis, and Carolyn Rose. 2020. Medtype: Improving medical entity linking with semantic type prediction. arXiv preprint arXiv:2005.00460.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Kepler: A unified model for knowledge embedding and pretrained language representation", |
|
"authors": [ |
|
{ |
|
"first": "Xiaozhi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tianyu", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhaocheng", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juanzi", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.06136" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2019. Kepler: A unified model for knowledge embedding and pre- trained language representation. arXiv preprint arXiv:1911.06136.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Google's neural machine translation system", |
|
"authors": [ |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Norouzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxim", |
|
"middle": [], |
|
"last": "Krikun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Bridging the gap between human and machine translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1609.08144" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between hu- man and machine translation. arXiv preprint arXiv:1609.08144.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Leveraging biomedical resources in bi-lstm for drug-drug interaction extraction", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "IEEE Access", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "33432--33439", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ACCESS.2018.2845840" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Xu, X. Shi, Z. Zhao, and W. Zheng. 2018. Leverag- ing biomedical resources in bi-lstm for drug-drug in- teraction extraction. IEEE Access, 6:33432-33439.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Coder: Knowledge infused cross-lingual medical term embedding for term normalization", |
|
"authors": [ |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhengyun", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sheng", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2011.02947" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zheng Yuan, Zhengyun Zhao, and Sheng Yu. 2020. Coder: Knowledge infused cross-lingual medical term embedding for term normalization. arXiv preprint arXiv:2011.02947.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "ERNIE: Enhanced language representation with informative entities", |
|
"authors": [ |
|
{ |
|
"first": "Zhengyan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1441--1451", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1139" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: En- hanced language representation with informative en- tities. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 1441-1451, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "The overall architecture of KeBioLM." |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "BC5-chem & BC5-disease(Li et al., 2016) contain 1500 PubMed abstracts for extracting chemical and disease entities respectively.NCBI-disease (Dogan et al., 2014) includes 793PubMed abstracts to detect disease entities.BC2GM(Smith et al., 2008) contains 20K PubMed sentences to extract gene entities." |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "The training instances (mentions for NER tasks and sentences with two entities for RE tasks) and the mention counts of NER and RE datasets preprocessed in BLURB benchmark respectively. The mention counts overlapping with UMLS 2020 AA release and KeBioLM are also listed. For the GAD dataset, annotated mentions do not appear in the BLURB preprocessed version." |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "F1-scores on NER and RE tasks in BLURB benchmark. Standard deviations of KeBioLM are reported across five runs. Results of diseaseBERT-biobert and bio-lm come from their corresponded papers. Others are copied from BLURB. * indicates that p \u2264 0.05 of one-sample t-test which compares whether the mean performance of KeBioLM is better than PubMedBERT. \u2020 Bio-lm applies different metrics with BLURB (micro F1 v.s." |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Ablation studies for transformer layers count in KeBioLM on the BLURB benchmark." |
|
}, |
|
"TABREF9": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Results of the probing test in terms of Recall@5." |
|
}, |
|
"TABREF11": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Probing examples of UMLS relation triplets. Queries and answer CUIs are listed. We only list one correct CUI for each query. For each model, one [MASK] token decoding result and an example of multi [MASK] decoding result are displayed. Bold text represents a term of the answer CUI." |
|
} |
|
} |
|
} |
|
} |