tnaumann committed
Commit
609b757
1 Parent(s): 49cf38a

Renames PubMedBERT to BiomedBERT

Files changed (1)
  1. README.md +5 -3
README.md CHANGED
@@ -7,15 +7,17 @@ widget:
   - text: "[MASK] is a tyrosine kinase inhibitor."
   ---
 
- ## PubMedBERT-large (abstracts only)
+ ## MSR BiomedBERT-large (abstracts only)
+
+ *NOTE: This model was previously named "PubMedBERT-large (abstracts only)".*
 
   Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general domain corpora, such as newswire and Web. A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models. [Recent work](https://arxiv.org/abs/2007.15779) shows that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models. [Followup work](https://arxiv.org/abs/2112.07869) explores larger model sizes and the impact of these on performance on the BLURB benchmark.
 
- This PubMedBERT is pretrained from scratch using _abstracts_ from [PubMed](https://pubmed.ncbi.nlm.nih.gov/).
+ This BiomedBERT is pretrained from scratch using _abstracts_ from [PubMed](https://pubmed.ncbi.nlm.nih.gov/).
 
   ## Citation
 
- If you find PubMedBERT useful in your research, please cite the following paper:
+ If you find BiomedBERT useful in your research, please cite the following paper:
 
   ```latex
   @misc{https://doi.org/10.48550/arxiv.2112.07869,
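For reference, a minimal sketch of querying the renamed model through the Hugging Face `transformers` fill-mask pipeline, mirroring the widget example in the README front matter. The repository ID below is an assumption inferred from the rename and may differ from the actual Hub ID on this model card's page.

```python
# Minimal sketch: masked-token prediction with the transformers fill-mask pipeline.
# The model ID is an assumption based on the PubMedBERT -> BiomedBERT rename;
# substitute the actual repository ID shown on the model card.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="microsoft/BiomedNLP-BiomedBERT-large-uncased-abstract",  # assumed repo ID
)

# Same prompt as the widget example in the README front matter.
for prediction in fill_mask("[MASK] is a tyrosine kinase inhibitor."):
    print(f"{prediction['token_str']:15s} {prediction['score']:.3f}")
```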