{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:07:07.028076Z"
},
"title": "BioELECTRA:Pretrained Biomedical text Encoder using Discriminators",
"authors": [
{
"first": "Kamal",
"middle": [
"Raj"
],
"last": "Kanakarajan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SAAMA AI Research Lab",
"location": {
"settlement": "Chennai",
"country": "India"
}
},
"email": ""
},
{
"first": "Bhuvana",
"middle": [],
"last": "Kundumani",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SAAMA AI Research Lab",
"location": {
"settlement": "Chennai",
"country": "India"
}
},
"email": "bhuvana.kundumani@saama.com"
},
{
"first": "Malaikannan",
"middle": [],
"last": "Sankarasubbu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SAAMA AI Research Lab",
"location": {
"settlement": "Chennai",
"country": "India"
}
},
"email": "malaikannan.sankarasubbu@saama.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce Bio-ELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELEC-TRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce Bio-ELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELEC-TRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Following the success of BERT (Devlin et al., 2018) (Bidirectional Encoder Representations from Transformers) in the general domain, the pretrain-andfinetune approach has been used in the Biomedical domain. With large scale free text available from PubMed and PubMed central (millions of articles), biomedical domain has large unlabelled domainspecific corpus. However, the biomedical domain has labelled datasets that are very small compared to the general domain. Thus the transfer learning approach is well suited for Biomedical domain.",
"cite_spans": [
{
"start": 30,
"end": 51,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the biomedical domain, BioBERT (Lee et al., 2020) , BlueBERT (Peng et al., 2019) and Clinical-BERT (Alsentzer et al., 2019) are the initial models based on BERT. These models follow continual pretraining approach where the model weights are initialised with weights from BERT trained on Wikipedia and Book Corpus and uses the same vocabulary. Recent models SciBERT , PubMedBERT (Gu et al., 2020) and Biolm (Lewis et al., 2020) have shown that pretrain-ing from scratch using domain specific corpora along with domain specific vocabulary improves the model performance significantly.",
"cite_spans": [
{
"start": 34,
"end": 52,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 64,
"end": 83,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 102,
"end": 126,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 381,
"end": 398,
"text": "(Gu et al., 2020)",
"ref_id": null
},
{
"start": 409,
"end": 429,
"text": "(Lewis et al., 2020)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we adapt ELECTRA (Clark et al., 2020) , a recent and powerful general domain model for the biomedical domain and we release Bio-ELECTRA -a biomedical domain specific language encoder model. We follow the domain specific pretraining approach where the ELECTRA model is pretrained on PubMed and PubMed Central (PMC) full text articles. ELECTRA outperforms BERT, ALBERT (Lan et al., 2019) , XLNet (Yang et al., 2020) and RoBERTa on the GLUE (Wang et al., 2019) Benchmark and SQuAD (Rajpurkar et al., 2016a) .",
"cite_spans": [
{
"start": 31,
"end": 51,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 381,
"end": 399,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 408,
"end": 427,
"text": "(Yang et al., 2020)",
"ref_id": "BIBREF44"
},
{
"start": 452,
"end": 471,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF41"
},
{
"start": 492,
"end": 517,
"text": "(Rajpurkar et al., 2016a)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, we make the following contributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We release BioELECTRA(P), BioELEC-TRA(P + F), BioELECTRA(P + F) LT(Longer Training of additional 1M steps) and Bio-ELECTRA(W + P) pretrained from scratch using Biomedical domain text. Pretrained weights for all these models are publicly released through huggingface transformers (Wolf et al., 2020) model hub.",
"cite_spans": [
{
"start": 282,
"end": 301,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. We evaluate our BioELECTRA models on all the 13 datasets in the BLURB (Gu et al., 2020) benchmark and on all the 4 clinical datasets from BLUE (Peng et al., 2019) benchmark across 7 NLP tasks.",
"cite_spans": [
{
"start": 73,
"end": 90,
"text": "(Gu et al., 2020)",
"ref_id": null
},
{
"start": 146,
"end": 165,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. BioELECTRA model achieves state-of-theart (SOTA) results on all the 13 datasets in BLURB benchmark and achieves SOTA on all the Clinical datasets from BLUE Benchmark.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "4. We publicly release the code 1 and parameters to reproduce our research results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
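{
"text": "The released checkpoints can be loaded with the huggingface transformers library. A minimal usage sketch is shown below; the model identifier is an assumption for illustration and should be replaced with the id published on the model hub.

from transformers import AutoTokenizer, AutoModel

model_id = 'kamalkraj/bioelectra-base-discriminator-pubmed'  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer('EGFR mutations predict response to gefitinib.', return_tensors='pt')
hidden_states = model(**inputs).last_hidden_state  # (1, sequence_length, hidden_size)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},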
{
"text": "Pretrained word embeddings (Mikolov et al., 2013) , (Pennington et al., 2014) and contextualised word embeddings have helped the deep learning algorithms to improve their performance in NLP tasks. ULMFiT (Howard and Ruder, 2018) , introduces the transfer learning approach to Natural language processing and Ope-nAI GPT (Radford et al., 2018) , pretrains a transformer (Vaswani et al., 2017) for learning general language representations. Similar to ULM-FiT and OpenAI GPT, BERT (Devlin et al., 2018) follows this fine tuning approach and introduces a powerful bidirectional language representation model using the transformer based model architecture. BERT achieves SOTA on most NLP tasks without any heavily-engineered task specific architectures. Following the success of BERT, XLNet (Yang et al., 2020) with generalized autoregressive pretraining and RoBERTa with robust pretraining techniques experiment with different pretraining objectives. ALBERT (Lan et al., 2019) uses weight sharing and embedding factorisation to reduce memory consumption and increase the training speed. ELECTRA (Clark et al., 2020) introduces sample-efficient 'replaced token detection' pretraining technique. ELECTRAsmall, trained with very little compute outperforms GPT and performs comparably with larger models like RoBERTa and XLNet.",
"cite_spans": [
{
"start": 27,
"end": 49,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 52,
"end": 77,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF30"
},
{
"start": 204,
"end": 228,
"text": "(Howard and Ruder, 2018)",
"ref_id": "BIBREF13"
},
{
"start": 320,
"end": 342,
"text": "(Radford et al., 2018)",
"ref_id": "BIBREF32"
},
{
"start": 369,
"end": 391,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF40"
},
{
"start": 479,
"end": 500,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 787,
"end": 806,
"text": "(Yang et al., 2020)",
"ref_id": "BIBREF44"
},
{
"start": 955,
"end": 973,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 1092,
"end": 1112,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Recent works adapt BERT to scientific, biomedical and clinical domains. BioBERT (Lee et al., 2020) pretrains BERT with data from PubMed and PubMed Central (PMC) articles. BlueBERT (Peng et al., 2019) pretrains BERT on PubMed, PMC and MIMIC III (Johnson et al., 2016) data. ClinicalBERT (Alsentzer et al., 2019) initialises with BioBERT weights and pretrains on data from MIMIC III. SciBERT , Pub-MedBERT (Gu et al., 2020) and Bio-lm (Lewis et al., 2020) pretrain BERT based models from scratch with domain specific data. SciBERT pretrains on 1.14M papers from Semantic Scholar (Ammar et al., 2018) , PubMedBERT on PubMed and PMC data and Bio-lm (Lewis et al., 2020) ",
"cite_spans": [
{
"start": 80,
"end": 98,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 180,
"end": 199,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 240,
"end": 266,
"text": "III (Johnson et al., 2016)",
"ref_id": null
},
{
"start": 286,
"end": 310,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 404,
"end": 421,
"text": "(Gu et al., 2020)",
"ref_id": null
},
{
"start": 433,
"end": 453,
"text": "(Lewis et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 577,
"end": 597,
"text": "(Ammar et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 645,
"end": 665,
"text": "(Lewis et al., 2020)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The pioneers in applying transfer learning to NLP, pretrain Language Model(LM) on unlabelled large corpora in the general domain like Wikipedia articles, Web Text, Books corpus, Gigaword, web crawl etc. Biomedical literature has specific concepts and terms that are not part of the general domain. To enable the models to learn these features very specific to the biomedical domain, BioNLP models, BioBERT (Lee et al., 2020) and BlueBERT (Peng et al., 2019) use the mixed-domain pretraining approach (Gu et al., 2020) . In mixed-domain approach, the model initialises with BERT weights and vocabulary trained on general domain text and the model is pretrained on the biomedical text.",
"cite_spans": [
{
"start": 406,
"end": 424,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 438,
"end": 457,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 500,
"end": 517,
"text": "(Gu et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretraining from scratch using domain specific corpora",
"sec_num": "3.1"
},
{
"text": "Biomedical domain with its publicly available literature which is growing exponentially by the year makes it well suited for domain specific pretraining from scratch. Using a general domain vocabulary for biomedical text results in complex and specific terms being split into numerous subwords, as they do not exist in the general domain vocabulary. Hence a model trained on these word pieces might not generalise well for the domain specific downstream tasks. Recent work PubMed-BERT (Gu et al., 2020) and Bio-lm (Lewis et al., 2020 ) pretrain a language model from scratch on PubMed abstracts and use the vocabulary that is generated from PubMed abstracts. These models outperform the BioBERT and BlueBERT models on biomedical and clinical NLP tasks .",
"cite_spans": [
{
"start": 485,
"end": 502,
"text": "(Gu et al., 2020)",
"ref_id": null
},
{
"start": 514,
"end": 533,
"text": "(Lewis et al., 2020",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretraining from scratch using domain specific corpora",
"sec_num": "3.1"
},
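{
"text": "To illustrate the vocabulary effect described above, the sketch below compares how a general-domain WordPiece vocabulary and a biomedical vocabulary segment a biomedical term; the checkpoint names are assumptions used only for illustration, and the exact segmentation depends on the vocabulary.

from transformers import AutoTokenizer

general = AutoTokenizer.from_pretrained('bert-base-uncased')
biomedical = AutoTokenizer.from_pretrained('microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext')

term = 'acetyltransferase'
print(general.tokenize(term))     # typically split into several word pieces
print(biomedical.tokenize(term))  # typically kept as one, or very few, in-vocabulary tokens",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pretraining from scratch using domain specific corpora",
"sec_num": "3.1"
},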
{
"text": "We use data very similar to PubMedBERT for fair comparison. PubMed Abstracts We use text from 22 million PubMed abstracts downloaded as of January 2021. 27 GB of cleaned text with approximately 4.2 billion words are used. PubMed Central (PMC) We obtained full text from 3.2 million PubMed Central (PMC) 2 articles as of January 2021. After cleaning the data, we use 57GB of text with approximately 9.6 billion words. Preprocessing We used pubmed_parser parser 3 for extracting the abstracts and full text articles. We used SciSpacy (Neumann et al., 2019) for sentence tokenization.",
"cite_spans": [
{
"start": 532,
"end": 554,
"text": "(Neumann et al., 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.2"
},
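{
"text": "The sketch below outlines this preprocessing step for abstracts, assuming the pubmed_parser package and a SciSpacy model such as en_core_sci_sm are installed; the exact cleaning rules used for BioELECTRA are not reproduced here.

import pubmed_parser as pp
import spacy

nlp = spacy.load('en_core_sci_sm')  # SciSpacy model used here for sentence splitting

def abstracts_to_sentences(medline_xml_path, out_path):
    records = pp.parse_medline_xml(medline_xml_path)  # one dict per PubMed record
    with open(out_path, 'w') as out:
        for record in records:
            abstract = (record.get('abstract') or '').strip()
            if not abstract:
                continue
            for sentence in nlp(abstract).sents:
                out.write(sentence.text.strip() + '\n')
            out.write('\n')  # blank line separating documents for the pretraining data builder",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.2"
},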
{
"text": "Architecture ELECTRA (Clark et al., 2020) pretraining architecture consists of a Generator and a Discriminator network. Each of them consists of Encoder blocks of the transformer (Vaswani et al., 2017) architecture. The generator size is chosen smaller than the Discriminator to make ELECTRA computationally efficient. The size of the Hidden dimension (H) of the transformer encoder in Generator is reduced to 1/3 the size of the Discriminator. The Generator and Discriminator share the weights of the Embedding layer, which is composed of token embeddings, position embeddings and type embeddings. An embedding projector is added to Generator after the Embedding layer to project the embedding dimension H to H/3. Figure 1 shows pretraining configuration of ELECTRA-Base model. The Generator is trained with maximum likelihood as in ELECTRA paper and Generator is not given a noise input vector as in General Adversarial Networks (GANs). The Discriminator is trained very similar to a classifier with cross entropy loss. After pretraining only the Discriminator is used for all the finetuning.",
"cite_spans": [
{
"start": 21,
"end": 41,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 179,
"end": 201,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 715,
"end": 724,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "ELECTRA",
"sec_num": "3.3"
},
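{
"text": "A minimal sketch of this Generator/Discriminator setup using the huggingface ELECTRA classes is given below; the sizes follow the base configuration described above (Discriminator hidden size 768, Generator hidden size reduced to one-third), while the vocabulary size is illustrative.

from transformers import ElectraConfig, ElectraForMaskedLM, ElectraForPreTraining

disc_config = ElectraConfig(vocab_size=30522, embedding_size=768, hidden_size=768,
                            num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072)
gen_config = ElectraConfig(vocab_size=30522, embedding_size=768, hidden_size=256,
                           num_hidden_layers=12, num_attention_heads=4, intermediate_size=1024)

generator = ElectraForMaskedLM(gen_config)          # small Generator, trained with maximum likelihood
discriminator = ElectraForPreTraining(disc_config)  # Discriminator, trained as a per-token classifier

# The Embedding layer (token + position + type embeddings) is shared; the Generator's internal
# projection maps the shared embedding dimension H down to its hidden size H/3.
generator.electra.embeddings = discriminator.electra.embeddings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ELECTRA",
"sec_num": "3.3"
},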
{
"text": "Input/Output representations ELECTRA follows the Input/Output representations of BERT (Devlin et al., 2018) . The first token is always the [CLS] token whose final hidden state is used for finetuning sentence level tasks. For single sentence tasks, the tokenized input sequence should follow the [CLS] token and end with [SEP] . For sentence pair tasks, the tokenized input sentences should be separated by a [SEP] token. Type and Position embeddings which indicate the sentence that it belongs to (sentence1/sentence2) are added to the input token embeddings. Final input representation of a given token is the summation of its token, position and type embeddings which are learnt during the training.",
"cite_spans": [
{
"start": 86,
"end": 107,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 321,
"end": 326,
"text": "[SEP]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ELECTRA",
"sec_num": "3.3"
},
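{
"text": "The sketch below illustrates this input format for a sentence pair; the tokenizer checkpoint is an assumption, and any uncased WordPiece tokenizer with [CLS]/[SEP] tokens behaves the same way.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')  # illustrative uncased WordPiece tokenizer

encoded = tokenizer('aspirin inhibits cox-1.', 'it reduces platelet aggregation.')
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))  # ['[CLS]', 'aspirin', ..., '[SEP]', 'it', ..., '[SEP]']
print(encoded['token_type_ids'])  # 0 for [CLS] and sentence 1, 1 for sentence 2
# Inside the encoder, token_embedding + position_embedding + type_embedding is the
# final input representation at each position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ELECTRA",
"sec_num": "3.3"
},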
{
"text": "Pretraining Task ELECTRA introduces replaced token prediction pretraining task where the model is trained to distinguish real input tokens from synthetically generated tokens. Random words are selected in the input text and replaced with tokens generated by a small Generator network. The Discriminator network then predicts whether the input token is original or replaced. This novel approach ensures that the model learns from all the input tokens and not just from 15% of the (Smith et al., 2008) NER 15197 3061 6325 F1 entity-level JNLPBA (Collier and Kim, 2004) NER 46750 4551 8662 F1 entity-level ShARe/CLEFE* (Suominen et al., 2013 (Wang et al., 2020) tokens in the input text as in BERT. This makes the pretraining task computationally effective. As recent work It is pretrained further with PubMed abstracts for 100k, 200k and 400k steps. We publish our results of BioELECTRA(W+F) pretrained with 200k steps as these results were comparable with Pub-MedBERT BLURB (Gu et al., 2020) score.",
"cite_spans": [
{
"start": 479,
"end": 499,
"text": "(Smith et al., 2008)",
"ref_id": "BIBREF36"
},
{
"start": 543,
"end": 566,
"text": "(Collier and Kim, 2004)",
"ref_id": "BIBREF6"
},
{
"start": 616,
"end": 638,
"text": "(Suominen et al., 2013",
"ref_id": "BIBREF38"
},
{
"start": 639,
"end": 658,
"text": "(Wang et al., 2020)",
"ref_id": "BIBREF42"
},
{
"start": 973,
"end": 990,
"text": "(Gu et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ELECTRA",
"sec_num": "3.3"
},
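{
"text": "A simplified sketch of one replaced-token-detection training step is shown below; it assumes the generator and discriminator objects from the architecture sketch above, and it omits the Generator's own masked-LM loss and several details of the official implementation.

import torch
import torch.nn.functional as F

def rtd_step(generator, discriminator, input_ids, attention_mask, mask_token_id, mask_prob=0.15):
    # select ~15% of the non-padding positions and mask them for the Generator
    masked = (torch.rand(input_ids.shape) < mask_prob) & attention_mask.bool()
    generator_input = input_ids.masked_fill(masked, mask_token_id)

    # the small Generator proposes plausible replacement tokens at the masked positions
    gen_logits = generator(input_ids=generator_input, attention_mask=attention_mask).logits
    sampled = torch.distributions.Categorical(logits=gen_logits).sample()
    corrupted = torch.where(masked, sampled, input_ids)

    # a position is labelled replaced (1) only if the sampled token differs from the original
    labels = (masked & (corrupted != input_ids)).float()

    # the Discriminator classifies every token as original vs replaced
    disc_logits = discriminator(input_ids=corrupted, attention_mask=attention_mask).logits
    return F.binary_cross_entropy_with_logits(disc_logits, labels, weight=attention_mask.float())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ELECTRA",
"sec_num": "3.3"
},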
{
"text": "SciBERT shows that models trained on uncased vocabularies perform slightly better than the cased models in biomedical domain even for NER tasks. Hence we use the uncased biomedical domain-specific vocabularies from PubMedBERT for all our experiments. The optimization techniques and parameters from ELECTRA paper are followed. All our models are trained on Tensor Processing Unit(TPU) v3-8 instances. Refer Appendix A for complete model and optimizer details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ELECTRA",
"sec_num": "3.3"
},
{
"text": "We finetune our ELECTRA-Base models on 17 NLP datasets -13 biomedical datasets from the BLURB (Gu et al., 2020) benchmark and 4 clinical datasets from the BLUE (Peng et al., 2019) benchmark. We group our datasets based on the NLP tasks. We do not discuss the datasets in detail due to space constraints. Details on train, dev, test split, benchmark they belong to, evaluation metric used can be found in Table 1 . Detailed description of the datasets are available in the BLURB (Gu et al., 2020) and BLUE (Peng et al., 2019) paper.",
"cite_spans": [
{
"start": 94,
"end": 111,
"text": "(Gu et al., 2020)",
"ref_id": null
},
{
"start": 160,
"end": 179,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 478,
"end": 495,
"text": "(Gu et al., 2020)",
"ref_id": null
},
{
"start": 505,
"end": 524,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 404,
"end": 411,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "NER task aims at recognizing and predicting the entities e.g (chemicals, diseases, genes, proteins) in the given text. We use BC5-Chemical, BC5-Disease, NCBI-Disease, BC2GM, JNLPBA biomedical datasets from the BLURB benchmark. These datasets have the same train, dev and test split as released by (Crichton et al., 2017) . In addition to these, ShARe/CLEFE clinical dataset used by BLUE benchmark which uses the train, dev and test split released by (Suominen et al., 2013 ) is used for NER task.",
"cite_spans": [
{
"start": 297,
"end": 320,
"text": "(Crichton et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 450,
"end": 472,
"text": "(Suominen et al., 2013",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Recognition (NER)",
"sec_num": "4.2.1"
},
{
"text": "PICO task is very similar to NER, where the model aims to predict the Participants, Interventions, Comparisons and Outcomes entities in the given text. EBM PICO (Nye et al., 2020) dataset from the BLURB benchmark which has the same train, test and dev split as the original dataset is used for this task.",
"cite_spans": [
{
"start": 161,
"end": 179,
"text": "(Nye et al., 2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PICO extraction (PICO)",
"sec_num": "4.2.2"
},
{
"text": "Relation Extraction task predicts relations and their types between the two entities mentioned in the given sentences (e.g, gene-disease relations, protein-chemical relations). We use DDI, ChemProt and GAD datasets from the BLURB benchmark and i2b2-2010 clinical dataset in the BLUE benchmark. GAD dataset in BLURB benchmark uses train, dev and test split created by (Lee et al., 2020) . For DDI, BLURB uses the original dataset by (Herrero-Zazo et al., 2013) and release their own train, dev and test datasets. BLURB uses the train, dev and test split from the original dataset (Krallinger et al., 2017) for ChemProt. BLUE uses the train, dev and test split released by (Uzuner et al., 2011) ",
"cite_spans": [
{
"start": 367,
"end": 385,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 432,
"end": 459,
"text": "(Herrero-Zazo et al., 2013)",
"ref_id": "BIBREF11"
},
{
"start": 579,
"end": 604,
"text": "(Krallinger et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 671,
"end": 692,
"text": "(Uzuner et al., 2011)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction (RE)",
"sec_num": "4.2.3"
},
{
"text": "Sentence Similarity task predicts the similarity score based on how similar are the given pair of sentences. BIOSSES dataset from BLURB benchmark and ClinicalSTS dataset instead of the Med-STS dataset is chosen from BLUE benchmark. BLURB uses the train, dev and split created by (Peng et al., 2019) . ClinicalSTS dataset is chosen as that is the latest version provided by n2c2 2019 challenge (Wang et al., 2020) . It has added 574 more samples for training and a new test set of 412 samples. As this dataset doesn't have a public train and dev split, we have split it into 80% train and 20% dev set and we use the original test set for evaluation.",
"cite_spans": [
{
"start": 279,
"end": 298,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 393,
"end": 412,
"text": "(Wang et al., 2020)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Similarity",
"sec_num": "4.2.4"
},
{
"text": "Document classification task aims to predict the multiple labels for the given text. Evaluation for Document classification task is done at the document level where we aggregate the labels over all the sentences in a document. We use HoC dataset from BLURB benchmark which uses the original dataset by (Baker et al., 2015) to create their own train, dev and test split.",
"cite_spans": [
{
"start": 302,
"end": 322,
"text": "(Baker et al., 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document classification",
"sec_num": "4.2.5"
},
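{
"text": "As an illustration of the document-level evaluation described above, the short sketch below aggregates sentence-level multi-label predictions per document, here simply as the union over a document's sentences; the data layout and label strings are hypothetical.

from collections import defaultdict

def aggregate_document_labels(sentence_predictions):
    # sentence_predictions: iterable of (document_id, predicted_label_set) pairs, one per sentence
    document_labels = defaultdict(set)
    for document_id, labels in sentence_predictions:
        document_labels[document_id] |= set(labels)
    return dict(document_labels)

predictions = [('doc1', {'sustaining proliferative signaling'}), ('doc1', set()),
               ('doc2', {'evading growth suppressors'})]
print(aggregate_document_labels(predictions))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document classification",
"sec_num": "4.2.5"
},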
{
"text": "Natural Language Inference task predicts whether the relation between two sentences are entailment, contradiction or neutrality. MedNLI (Romanov and Shivade, 2018) dataset from the BLUE benchmark which uses the original train, dev and test split is used for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Inference (NLI)",
"sec_num": "4.2.6"
},
{
"text": "Question Answering task aims to predict the answers in the context when a question text is given as the first sentence. The answers are either twoway (yes/ no) or three-way (yes/ maybe/ no). Pub-MedQA and BioASQ datasets from BLURB benchmark are used for our experiments. For both Pub-MedQA (Jin et al., 2019) and BioASQ (Nentidis et al., 2019) , BLURB uses the original train, dev and test split.",
"cite_spans": [
{
"start": 291,
"end": 309,
"text": "(Jin et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 321,
"end": 344,
"text": "(Nentidis et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Answering (QA)",
"sec_num": "4.2.7"
},
{
"text": "ELECTRA (Clark et al., 2020) applies very minimal architectural changes for finetuning downstream tasks. We follow the same approach as ELECTRA for finetuning BioELECTRA on the various downstream tasks. BIO encoding scheme is adopted for the NER tasks where B stands for Beginning, I stands for Inside and O stands for Outside. All the NER datasets in BLURB benchmark and ShARe/CLEFE in BLUE benchmark have (Clark et al., 2020) uses the vector representation of the [CLS] token to generate the output for all the given NLP tasks except NER and PICO. For NER and PICO, representations for each token is used to classify the entities. A simple linear layer is added to the output of ELECTRA for finetuning. ELECTRA does not use LSTM (Hochreiter and Schmidhuber, 1997) , CRF (Lafferty et al., 2001 ) layers for NER tasks. Figure 2 in appendix B illustrates the finetuning architecture for the NLP tasks. Mean-square error is used for regression tasks and cross entropy loss is used for classification tasks. Similar to BERT finetuning, all the layers are fine-tuned together along with task specific prediction layer. We use 'discriminative finetuning' similar to ELECTRA, where only the final layer is trained with the original learning rate and all other layers use a learning rate with a decay factor. For finetuning, Adam (Kingma and Ba, 2017) optimizer with a slanted triangular learning rate scheduler which linearly warms up (10% of steps) followed by linear decay (90% of steps) is used. We also use a dropout probability of 10%. We experiment with the following hyper parameters: learning rate [3e-5, 5e-5, 1e-4, 1.5e-4, 2e-4], batch size [16, 32] , layer-wise learning-rate decay out of [0.9, 0.8, 0.7] and epochs [3, 5] . BIOSSES (Soganc\u0131oglu et al., 2017) , PubMedQA (Jin et al., 2019) , BioASQ (Nentidis et al., 2019) and Clini-calSTS (Wang et al., 2020) are finetuned for longer epochs. For more details on the hyper parameters, refer Appendix B. We ran 10 fine tuning runs on BIOSSES, BioASQ and PubMedQA since the datasets are relatively smaller and 5 runs on all the other datasets. The average score is reported as the final score for the evaluation metric. ",
"cite_spans": [
{
"start": 8,
"end": 28,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 407,
"end": 427,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 466,
"end": 471,
"text": "[CLS]",
"ref_id": null
},
{
"start": 731,
"end": 765,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF12"
},
{
"start": 772,
"end": 794,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF18"
},
{
"start": 1645,
"end": 1649,
"text": "[16,",
"ref_id": null
},
{
"start": 1650,
"end": 1653,
"text": "32]",
"ref_id": null
},
{
"start": 1721,
"end": 1724,
"text": "[3,",
"ref_id": null
},
{
"start": 1725,
"end": 1727,
"text": "5]",
"ref_id": null
},
{
"start": 1738,
"end": 1764,
"text": "(Soganc\u0131oglu et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 1776,
"end": 1794,
"text": "(Jin et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 1804,
"end": 1827,
"text": "(Nentidis et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 1845,
"end": 1864,
"text": "(Wang et al., 2020)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [
{
"start": 819,
"end": 827,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Fine tuning",
"sec_num": "4.3"
},
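{
"text": "A sketch of this finetuning setup is given below: a linear prediction layer on top of the discriminator encoder, layer-wise learning-rate decay, and a linear warmup/decay ('slanted triangular') schedule. The hyperparameter values follow the text, while the checkpoint id and the grouping code are illustrative rather than the exact implementation.

import torch
from transformers import ElectraModel, get_linear_schedule_with_warmup

def layerwise_param_groups(encoder, base_lr=1e-4, decay=0.8, num_layers=12):
    groups = []
    for i in range(num_layers - 1, -1, -1):  # top encoder layer decays once, lower layers decay further
        groups.append({'params': encoder.encoder.layer[i].parameters(),
                       'lr': base_lr * decay ** (num_layers - i)})
    groups.append({'params': encoder.embeddings.parameters(),
                   'lr': base_lr * decay ** (num_layers + 1)})  # ~5.5e-6 for base_lr=1e-4, decay=0.8
    return groups

encoder = ElectraModel.from_pretrained('kamalkraj/bioelectra-base-discriminator-pubmed')  # assumed hub id
head = torch.nn.Linear(encoder.config.hidden_size, 2)  # task-specific prediction layer

param_groups = [{'params': head.parameters(), 'lr': 1e-4}] + layerwise_param_groups(encoder)
optimizer = torch.optim.AdamW(param_groups, lr=1e-4)

num_training_steps = 1000  # illustrative; 10% warmup followed by linear decay
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=int(0.1 * num_training_steps),
                                            num_training_steps=num_training_steps)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine tuning",
"sec_num": "4.3"
},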
{
"text": "We finetune all of the four BioELECTRA models mentioned in 4.1 for seven biomedical text mining tasks (NER, PICO, Relation Extraction, Sentence Similarity, Document Classification, Question Answering and Natural Language Inference) that are part of the BLURB (Gu et al., 2020) and BLUE (Peng et al., 2019) benchmark. BLURB benchmark Out of the four BioELEC-TRA models, BioELECTRA (P) model pretrained from scratch on PubMed abstracts alone along with biomedical domain specific vocabulary (from Pub-MedBERT (Gu et al., 2020) ) achieves new Stateof-the-Art (SOTA) results on all of the datasets in BLURB benchmark. Our results on BioELECTRA (P) along with the scores for BioBERT (Lee et al., 2020) , SciBERT , Clinical-BERT (Alsentzer et al., 2019) , BlueBERT (Peng et al., 2019) and PubMedBERT (Gu et al., 2020) for all the tasks in the BLURB benchmark are shown in table 2. The scores on these datasets for all these models are taken from the BLURB benchmark. As we do not have details on train, test and dev split of datasets used by Bio-lm (Lewis et al., 2020 ) paper, we are not able to compare our results with their results. For NCBI-Disease, where the train, test and dev split is publicly available, our model (89.38%) performs better than the Biolm Base (PM + Voc) model (88.2%). ELECTRA performs significantly better than all other BERT based models on the SQuAD (Rajpurkar et al., 2016b) benchmark in the general domain. Similarly, BioELECTRA (P) model has significantly higher scores on the Question Answering tasks. It achieves new SOTA of 64.02% (3.78% increase over the previous SOTA) on PubMedQA and with a new SOTA of 88.57% (1.01 % increase over the previous SOTA) on BioASQ. Our overall BLURB score (macro average of the average metric for each of the six tasks) is 82.40% which is 1.3% higher than PubMedBERT BLURB score of 81.10%.",
"cite_spans": [
{
"start": 259,
"end": 276,
"text": "(Gu et al., 2020)",
"ref_id": null
},
{
"start": 286,
"end": 305,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 507,
"end": 524,
"text": "(Gu et al., 2020)",
"ref_id": null
},
{
"start": 678,
"end": 696,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 723,
"end": 747,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 759,
"end": 778,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 794,
"end": 811,
"text": "(Gu et al., 2020)",
"ref_id": null
},
{
"start": 1043,
"end": 1062,
"text": "(Lewis et al., 2020",
"ref_id": "BIBREF21"
},
{
"start": 1373,
"end": 1398,
"text": "(Rajpurkar et al., 2016b)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We present results of Bio-ELECTRA (P) pretrained on PubMed abstracts alone and BioELECTRA (P+F) pretrained on both PubMed abstracts and PubMed full text articles on four of the clinical datasets in the BLUE benchmark in table3. We compare the performance of our models with the results of BioBERT, ClinicalBERT, BlueBERT and PubMedBERT. Since the scores on the train, dev and test split of these clinical datasets by BioBERT, ClinicalBERT, BlueBERT and Pub-MedBERT are not available, we used their pretrained weights on these datasets and documented the results. We do not have the results of SciBERT model as it was trained on mixed domain data. Out of the four datasets in the BLUE benchmark, we have results of Biolm for i2b2-2010 and MedNLI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLUE benchmark",
"sec_num": null
},
{
"text": "Since we do not have the train, dev and test split used by Biolm for i2b2-2010, we compare our results only for the MedNLI dataset. Score of our BioELECTRA (P+F) model 86.34% is significantly higher than Biolm Base model (PM + Voc) score of 83.2%. We also note that BioELECTRA performs better than BERT based models trained on MIMIC data. BioELECTRA (P) achieves new SOTA on three of the datasets -i2b2-2010, ShARe/CLEFE and ClinicalSTS. BioELECTRA (P+F)'s score of 86.34% on MedNLI task is marginally (0.07%) higher than the score of BioELECTRA (P)'s score of 86.27% and this is the new SOTA for MedNLI dataset for models trained on PubMed abstracts and PubMed Central full text articles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLUE benchmark",
"sec_num": null
},
{
"text": "Our models pretrained on domain specific text along with domain specific vocabulary have consistently shown that the pretraining from scratch with domain specific data enables the model to capture the contextual representations of the language better. Comparison of BioELECTRA models Table 4 shows the comparison of results of our models BioELECTRA(P), BioELECTRA (P+F) and Bio-ELECTRA (P+F) LT with longer training of additional 1 million steps and BioELECTRA (W+P). BioELECTRA (W+P) is pretrained from scratch on Wikipedia and PubMed abstracts along with a general domain vocabulary (BERT (Devlin et al., 2018) uncased vocabulary). We observe that Bio-ELECTRA (P+F) LT with longer training of 2 million steps does not give substantial improvements on all of the tasks. BioELECTRA (P+F) LT model's result is slightly better than BioELECTRA (P) on BC5-chem dataset. BioELECTRA (P+F) LT model's result on GAD and BioASQ datasets are marginally better than BioELECTRA (P+F). BioELECTRA (P+F) performs slightly better than BioELECTRA (P) on DDI and BIOSSES datasets.",
"cite_spans": [
{
"start": 591,
"end": 612,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 284,
"end": 291,
"text": "Table 4",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "BLUE benchmark",
"sec_num": null
},
{
"text": "The results clearly show that all BioELECTRA models pretrained from scratch with biomedical domain text and domain specific vocabulary perform better than the model pretrained on both general and biomedical domain text with general domain vocabulary. However it is interesting to note that BioELECTRA (W+P) model has significantly better results for i2b2-2010, ShARe/CLEFE and Clin-icalSTS datasets than PubMedBERT. BioELEC-TRA (W+P)'s score for MedNLI is comparable to that of PubMedBERT (Gu et al., 2020) .",
"cite_spans": [
{
"start": 489,
"end": 506,
"text": "(Gu et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BLUE benchmark",
"sec_num": null
},
{
"text": "We release BioELECTRA-base models pretrained from scratch on biomedical domain specific text and evaluate the performance on seven different biomedical NLP tasks with 17 datasets. We achieve SOTA on all the datasets in the BLURB (Gu et al., 2020) benchmark and all four clinical datasets in the BLUE (Peng et al., 2019) benchmark. Our results show that pretraining from scratch with biomedical domain text helps the model to learn better contextual representations. We release the pretrained weights for all our models and the code for reproducibility. els on MIMIC III (Johnson et al., 2016) clinical notes and evaluate the performance of the models on biomedical NLP tasks. As ELECTRA shows a significant improvement on SQuAD (Rajpurkar et al., 2016b) , we want to focus on Biomedical QA tasks (span prediction) and evaluate domain specific pretrained ELECTRA models performance. 'Discriminative finetuning' is adopted where the learning rate varies across the layers. The learning rate decays across the layers from top to bottom with a factor of 0.8 for all the NLP tasks. The colour gradient in figure 2 represents this . For a learning rate of 1e-4 , only the task specific prediction layer (final layer) is finetuned at this rate. With a decay factor of 0.8, the embedding layer for that particular task is finetuned at a learning rate of 5.5e-6. Table 6 shows the common hyperparameters used across tasks, and table 7 shows task specific hyperparameters.",
"cite_spans": [
{
"start": 229,
"end": 246,
"text": "(Gu et al., 2020)",
"ref_id": null
},
{
"start": 300,
"end": 319,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 728,
"end": 753,
"text": "(Rajpurkar et al., 2016b)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 1354,
"end": 1361,
"text": "Table 6",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
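{
"text": "The per-layer learning rates implied by these numbers can be checked directly; assuming a base-size encoder with 12 layers, the embedding layer sits 13 decay steps below the task-specific prediction layer.

base_lr, decay = 1e-4, 0.8
print(base_lr * decay ** 1)   # top encoder layer: 8.0e-05
print(base_lr * decay ** 13)  # embedding layer: ~5.5e-06, matching the value quoted above",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},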
{
"text": "The code and models are available at https://github.com/kamalkraj/BioELECTRA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.ncbi.nlm.nih.gov/pmc/ 3 https://github.com/titipata/pubmed_parser",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "BioBERT was trained with a batch size of 256 with 1M steps in pretraining and 1M steps in continual pretraining.5 PubMedBERT was trained with a batch size of 8,192 for 62,500 steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://cloud.google.com/blog/products/ai-machinelearning/bfloat16-the-secret-to-high-performance-on-cloudtpus 7 https://github.com/google-research/electra",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We plan to explore and experiment with our domain specific pretraining approach on ELECTRA-LARGE models.We also intend to train ELECTRA-BASE and ELECTRA-LARGE mod-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
},
{
"text": "This research was supported by Google's Tensor-Flow Research Cloud (TFRC). We also extend our thanks to Samuel Gurudas for his assistance in creating the diagrams in this research paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "All the BioELECTRA models are trained on TPU v3-8 instances. Adopting bfloat16 6 training helped us in improving the training speed. Very similar to BERT, we train the model in 2 phases, 90% of steps with sequence length of 128 (phase1) and 10% of steps with sequence length of 512 (phase2) to learn the positional embeddings. Model training reached 1M steps in 5 days (phase1 -4 days and phase2 -1day). For pretraining, we use the original ELECTRA code 7 released by authors. Refer table 5 for details regarding all the parameters. Figure 2 shows different architecture schema of different models.",
"cite_spans": [],
"ref_spans": [
{
"start": 533,
"end": 541,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Pretraining",
"sec_num": null
},
{
"text": "\u2022 Single Sentence Classification : ChemProt, DDI, GAD, i2b2-2010, HoC\u2022 Entity Classification: BC5-chem, BC5disease, NCBI-Disease, BC2GM, JNLPBA, ShARe/CLEFE, EBM PICO",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Finetuning",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Publicly available clinical BERT embeddings",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Jindi",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mcdermott",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "72--78",
"other_ids": {
"DOI": [
"10.18653/v1/W19-1909"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clini- cal BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78, Minneapolis, Minnesota, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Construction of the literature graph in semantic scholar",
"authors": [
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Groeneveld",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Crawford",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Dunkelberger",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgohary",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Feldman",
"suffix": ""
},
{
"first": "Vu",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Rodney",
"middle": [],
"last": "Kinney",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Kohlmeier",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Tyler",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hsu-Han",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Ooi",
"suffix": ""
},
{
"first": "Joanna",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Power",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Skjonsberg",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Wilhelm",
"suffix": ""
},
{
"first": "Madeleine",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Van Zuylen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "3",
"issue": "",
"pages": "84--91",
"other_ids": {
"DOI": [
"10.18653/v1/N18-3011"
]
},
"num": null,
"urls": [],
"raw_text": "Waleed Ammar, Dirk Groeneveld, Chandra Bhagavat- ula, Iz Beltagy, Miles Crawford, Doug Downey, Ja- son Dunkelberger, Ahmed Elgohary, Sergey Feld- man, Vu Ha, Rodney Kinney, Sebastian Kohlmeier, Kyle Lo, Tyler Murray, Hsu-Han Ooi, Matthew Pe- ters, Joanna Power, Sam Skjonsberg, Lucy Wang, Chris Wilhelm, Zheng Yuan, Madeleine van Zuylen, and Oren Etzioni. 2018. Construction of the litera- ture graph in semantic scholar. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers), pages 84-91, New Orleans -Louisiana. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic semantic classification of scientific literature according to the hallmarks of cancer",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "Ilona",
"middle": [],
"last": "Silins",
"suffix": ""
},
{
"first": "Yufan",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Imran",
"middle": [],
"last": "Ali",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "H\u00f6gberg",
"suffix": ""
},
{
"first": "Ulla",
"middle": [],
"last": "Stenius",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2015,
"venue": "Bioinformatics",
"volume": "32",
"issue": "3",
"pages": "432--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Baker, Ilona Silins, Yufan Guo, Imran Ali, Jo- han H\u00f6gberg, Ulla Stenius, and Anna Korhonen. 2015. Automatic semantic classification of scien- tific literature according to the hallmarks of cancer. Bioinformatics, 32(3):432-440.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "SciB-ERT: A pretrained language model for scientific text",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3615--3620",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1371"
]
},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3615- 3620, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research",
"authors": [
{
"first": "\u00c0lex",
"middle": [],
"last": "Bravo",
"suffix": ""
},
{
"first": "Janet",
"middle": [],
"last": "Pi\u00f1ero",
"suffix": ""
},
{
"first": "N\u00faria",
"middle": [],
"last": "Queralt-Rosinach",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Rautschka",
"suffix": ""
},
{
"first": "Laura",
"middle": [
"I"
],
"last": "Furlong",
"suffix": ""
}
],
"year": 2015,
"venue": "BMC bioinformatics",
"volume": "16",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00c0lex Bravo, Janet Pi\u00f1ero, N\u00faria Queralt-Rosinach, Michael Rautschka, and Laura I Furlong. 2015. Ex- traction of relations between genes and diseases from text and large-scale data analysis: implica- tions for translational research. BMC bioinformat- ics, 16(1):55.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Electra: Pre-training text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.10555"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than genera- tors. arXiv preprint arXiv:2003.10555.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Introduction to the bio-entity recognition task at JNLPBA",
"authors": [
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
},
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP)",
"volume": "",
"issue": "",
"pages": "73--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nigel Collier and Jin-Dong Kim. 2004. Introduc- tion to the bio-entity recognition task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP), pages 73-78, Geneva, Switzerland. COLING.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A neural network multi-task learning approach to biomedical named entity recognition",
"authors": [
{
"first": "Gamal",
"middle": [],
"last": "Crichton",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Billy",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2017,
"venue": "BMC Bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/s12859-017-1776-8"
]
},
"num": null,
"urls": [],
"raw_text": "Gamal Crichton, Sampo Pyysalo, Billy Chiu, and Anna Korhonen. 2017. A neural network multi-task learn- ing approach to biomedical named entity recogni- tion. BMC Bioinformatics, 18.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Ncbi disease corpus: a resource for disease name recognition and concept normalization",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Rezarta Islamaj Dogan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of biomedical informatics",
"volume": "47",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. Ncbi disease corpus: a resource for dis- ease name recognition and concept normalization. Journal of biomedical informatics, 47:1-10.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Jianfeng Gao, and Hoifung Poon. 2020. Domainspecific language model pretraining for biomedical natural language processing",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Tinn",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Lucas",
"suffix": ""
},
{
"first": "Naoto",
"middle": [],
"last": "Usuyama",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. Domain- specific language model pretraining for biomedical natural language processing.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The ddi corpus: An annotated corpus with pharmacological substances and drug-drug interactions",
"authors": [
{
"first": "Mar\u00eda",
"middle": [],
"last": "Herrero-Zazo",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Segura-Bedmar",
"suffix": ""
},
{
"first": "Paloma",
"middle": [],
"last": "Mart\u00ednez",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Declerck",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of biomedical informatics",
"volume": "46",
"issue": "5",
"pages": "914--920",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mar\u00eda Herrero-Zazo, Isabel Segura-Bedmar, Paloma Mart\u00ednez, and Thierry Declerck. 2013. The ddi corpus: An annotated corpus with pharmacological substances and drug-drug interactions. Journal of biomedical informatics, 46(5):914-920.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Universal language model fine-tuning for text classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "328--339",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1031"
]
},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "PubMedQA: A dataset for biomedical research question answering",
"authors": [
{
"first": "Qiao",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Zhengping",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Xinghua",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2567--2577",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1259"
]
},
"num": null,
"urls": [],
"raw_text": "Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. PubMedQA: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 2567- 2577, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Mimic-iii, a freely accessible critical care database",
"authors": [
{
"first": "E",
"middle": [
"W"
],
"last": "Alistair",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tom",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Liwei",
"suffix": ""
},
{
"first": "Mengling",
"middle": [],
"last": "Lehman",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Ghassemi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Moody",
"suffix": ""
},
{
"first": "Leo",
"middle": [
"Anthony"
],
"last": "Szolovits",
"suffix": ""
},
{
"first": "Roger G",
"middle": [],
"last": "Celi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mark",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alistair EW Johnson, Tom J Pollard, Lu Shen, Li- wei H Lehman, Mengling Feng, Mohammad Ghas- semi, Benjamin Moody, Peter Szolovits, Leo An- thony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. Scientific data, 3:160035.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2017. Adam: A method for stochastic optimization.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Overview of the biocreative vi chemical-protein interaction track",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Krallinger",
"suffix": ""
},
{
"first": "Obdulia",
"middle": [],
"last": "Rabal",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Saber",
"suffix": ""
},
{
"first": "Mart\u0131n",
"middle": [],
"last": "Akhondi",
"suffix": ""
},
{
"first": "Jes\u00fas",
"middle": [],
"last": "P\u00e9rez P\u00e9rez",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Santamar\u00eda",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gp Rodr\u00edguez",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the sixth BioCreative challenge evaluation workshop",
"volume": "1",
"issue": "",
"pages": "141--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Krallinger, Obdulia Rabal, Saber A Akhondi, Mart\u0131n P\u00e9rez P\u00e9rez, Jes\u00fas Santamar\u00eda, GP Ro- dr\u00edguez, et al. 2017. Overview of the biocreative vi chemical-protein interaction track. In Proceedings of the sixth BioCreative challenge evaluation work- shop, volume 1, pages 141-146.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth Inter- national Conference on Machine Learning, ICML '01, pages 282-289, San Francisco, CA, USA. Mor- gan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.11942"
]
},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- ing of language representations. arXiv preprint arXiv:1909.11942.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomed- ical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Pretrained language models for biomedical and clinical tasks: Understanding and extending the state-of-the-art",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "146--157",
"other_ids": {
"DOI": [
"10.18653/v1/2020.clinicalnlp-1.17"
]
},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Myle Ott, Jingfei Du, and Veselin Stoy- anov. 2020. Pretrained language models for biomed- ical and clinical tasks: Understanding and extend- ing the state-of-the-art. In Proceedings of the 3rd Clinical Natural Language Processing Workshop, pages 146-157, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database",
"authors": [
{
"first": "Jiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yueping",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Robin",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Chih-Hsuan",
"middle": [],
"last": "Sciaky",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Allan",
"middle": [
"Peter"
],
"last": "Leaman",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"J"
],
"last": "Davis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mattingly",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Wiegers",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sci- aky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database, 2016.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Results of the seventh edition of the bioasq challenge",
"authors": [
{
"first": "Anastasios",
"middle": [],
"last": "Nentidis",
"suffix": ""
},
{
"first": "Konstantinos",
"middle": [],
"last": "Bougiatiotis",
"suffix": ""
}
],
"year": 2019,
"venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases",
"volume": "",
"issue": "",
"pages": "553--568",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anastasios Nentidis, Konstantinos Bougiatiotis, Anas- tasia Krithara, and Georgios Paliouras. 2019. Re- sults of the seventh edition of the bioasq challenge. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 553- 568. Springer.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "ScispaCy: Fast and robust models for biomedical natural language processing",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "319--327",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5034"
]
},
"num": null,
"urls": [],
"raw_text": "Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. ScispaCy: Fast and robust models for biomedical natural language processing. In Pro- ceedings of the 18th BioNLP Workshop and Shared Task, pages 319-327, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A corpus with multi-level annotations of patients, interventions and outcomes to support language processing for medical literature",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Nye",
"suffix": ""
},
{
"first": "Junyi",
"middle": [
"Jessy"
],
"last": "Li",
"suffix": ""
},
{
"first": "Roma",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Iain",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Marshall",
"suffix": ""
},
{
"first": "Byron C",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wallace",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the conference. Association for Computational Linguistics. Meeting",
"volume": "2018",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Nye, Junyi Jessy Li, Roma Patel, Yinfei Yang, Iain J Marshall, Ani Nenkova, and Byron C Wallace. 2018. A corpus with multi-level annota- tions of patients, interventions and outcomes to sup- port language processing for medical literature. In Proceedings of the conference. Association for Com- putational Linguistics. Meeting, volume 2018, page 197. NIH Public Access.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Trialstreamer: Mapping and browsing medical evidence in real-time",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Nye",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "Iain",
"middle": [],
"last": "Marshall",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "63--69",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-demos.9"
]
},
"num": null,
"urls": [],
"raw_text": "Benjamin Nye, Ani Nenkova, Iain Marshall, and By- ron C. Wallace. 2020. Trialstreamer: Mapping and browsing medical evidence in real-time. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstra- tions, pages 63-69, Online. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Shankai",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "58--65",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5006"
]
},
"num": null,
"urls": [],
"raw_text": "Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 58- 65, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "E",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016a. SQuAD: 100,000+ questions for machine comprehension of text. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Squad: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016b. Squad: 100,000+ questions for machine comprehension of text.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Lessons from natural language inference in the clinical domain",
"authors": [
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Shivade",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexey Romanov and Chaitanya Shivade. 2018. Lessons from natural language inference in the clin- ical domain.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Overview of biocreative ii gene mention recognition",
"authors": [
{
"first": "Larry",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lorraine",
"suffix": ""
},
{
"first": "Rie",
"middle": [],
"last": "Tanabe",
"suffix": ""
},
{
"first": "Cheng-Ju",
"middle": [],
"last": "Johnson Nee Ando",
"suffix": ""
},
{
"first": "I-Fang",
"middle": [],
"last": "Kuo",
"suffix": ""
},
{
"first": "Chun-Nan",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Yu-Shi",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Christoph",
"middle": [
"M"
],
"last": "Klinger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Friedrich",
"suffix": ""
}
],
"year": 2008,
"venue": "Genome biology",
"volume": "9",
"issue": "S2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Larry Smith, Lorraine K Tanabe, Rie Johnson nee Ando, Cheng-Ju Kuo, I-Fang Chung, Chun-Nan Hsu, Yu-Shi Lin, Roman Klinger, Christoph M Friedrich, et al. 2008. Overview of biocreative ii gene mention recognition. Genome biology, 9(S2):S2.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Biosses: a semantic sentence similarity estimation system for the biomedical domain",
"authors": [
{
"first": "Gizem",
"middle": [],
"last": "Soganc\u0131oglu",
"suffix": ""
},
{
"first": "Hakime",
"middle": [],
"last": "\u00d6zt\u00fcrk",
"suffix": ""
},
{
"first": "Arzucan",
"middle": [],
"last": "\u00d6zg\u00fcr",
"suffix": ""
}
],
"year": 2017,
"venue": "Bioinformatics",
"volume": "33",
"issue": "14",
"pages": "49--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gizem Soganc\u0131oglu, Hakime \u00d6zt\u00fcrk, and Arzucan \u00d6zg\u00fcr. 2017. Biosses: a semantic sentence simi- larity estimation system for the biomedical domain. Bioinformatics, 33(14):i49-i58.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Overview of the share/clef ehealth evaluation lab 2013",
"authors": [
{
"first": "Hanna",
"middle": [],
"last": "Suominen",
"suffix": ""
},
{
"first": "Sanna",
"middle": [],
"last": "Salanter\u00e4",
"suffix": ""
},
{
"first": "Sumithra",
"middle": [],
"last": "Velupillai",
"suffix": ""
},
{
"first": "Wendy",
"middle": [
"W"
],
"last": "Chapman",
"suffix": ""
},
{
"first": "Guergana",
"middle": [],
"last": "Savova",
"suffix": ""
},
{
"first": "Noemie",
"middle": [],
"last": "Elhadad",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Brett",
"middle": [
"R"
],
"last": "South",
"suffix": ""
},
{
"first": "Danielle",
"middle": [
"L"
],
"last": "Mowery",
"suffix": ""
},
{
"first": "J",
"middle": [
"F"
],
"last": "Gareth",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Liadh",
"middle": [],
"last": "Leveling",
"suffix": ""
},
{
"first": "Lorraine",
"middle": [],
"last": "Kelly",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Goeuriot",
"suffix": ""
},
{
"first": "Guido",
"middle": [],
"last": "Martinez",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zuccon",
"suffix": ""
}
],
"year": 2013,
"venue": "Multimodality, and Visualization",
"volume": "",
"issue": "",
"pages": "212--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanna Suominen, Sanna Salanter\u00e4, Sumithra Velupil- lai, Wendy W. Chapman, Guergana Savova, Noemie Elhadad, Sameer Pradhan, Brett R. South, Danielle L. Mowery, Gareth J. F. Jones, Johannes Leveling, Liadh Kelly, Lorraine Goeuriot, David Martinez, and Guido Zuccon. 2013. Overview of the share/clef ehealth evaluation lab 2013. In In- formation Access Evaluation. Multilinguality, Multi- modality, and Visualization, pages 212-231, Berlin, Heidelberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "i2b2/VA challenge on concepts, assertions, and relations in clinical text",
"authors": [
{
"first": "\u00d6zlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Brett",
"suffix": ""
},
{
"first": "Shuying",
"middle": [],
"last": "South",
"suffix": ""
},
{
"first": "Scott L",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Duvall",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of the American Medical Informatics Association",
"volume": "18",
"issue": "5",
"pages": "552--556",
"other_ids": {
"DOI": [
"10.1136/amiajnl-2011-000203"
]
},
"num": null,
"urls": [],
"raw_text": "\u00d6zlem Uzuner, Brett R South, Shuying Shen, and Scott L DuVall. 2011. 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Asso- ciation, 18(5):552-556.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Superglue: A stickier benchmark for general-purpose language understanding systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3266--3280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language un- derstanding systems. In Advances in Neural Infor- mation Processing Systems, pages 3266-3280.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "The 2019 n2c2/OHNLP Track on Clinical Semantic Textual Similarity: Overview",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Henry",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "JMIR Med Inform",
"volume": "8",
"issue": "11",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Wang, S. Fu, F. Shen, S. Henry, O. Uzuner, and H. Liu. 2020. The 2019 n2c2/OHNLP Track on Clinical Semantic Textual Similarity: Overview. JMIR Med Inform, 8(11):e23375.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Patrick Von Platen",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Scao",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Drame",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2020. Xlnet: Generalized autoregressive pretraining for language understanding.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Overview of ELECTRA-Base model Pretraining. Output shapes are mentioned in parenthesis after each block.( B=Batch Size, MSL=Maximum Sequence Length, H=Hidden size )",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Overview of BioELECTRA model finetuning.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF3": {
"text": "Datasets from BLURB and BLUE benchmark. Number of instances in train, dev, and test set along with the evaluation metrics used for each of the datasets is listed.",
"num": null,
"type_str": "table",
"html": null,
"content": "
"
},
"TABREF6": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": ""
},
"TABREF8": {
"text": "Comparison of pretrained language models on the BLUE(Peng et al.",
"num": null,
"type_str": "table",
"html": null,
"content": ", 2019) benchmark. (P -PubMed |
"
},
"TABREF10": {
"text": "Comparison of BioELECTRA models on BLURB(Gu et al., 2020) and BLUE(Peng et al., 2019) benchmark.",
"num": null,
"type_str": "table",
"html": null,
"content": ""
},
"TABREF12": {
"text": "Common hyperparamters across tasks",
"num": null,
"type_str": "table",
"html": null,
"content": "\u2022 Sentence Pair Classification: BIOSSES, Clin- |
icalSTS |
\u2022 Question Answering: PubMedQA, BioASQ |
"
},
"TABREF14": {
"text": "LR : Learning Rate, BS : Batch Size, MSL : Maximum Sequence Length",
"num": null,
"type_str": "table",
"html": null,
"content": ""
}
}
}
}