roberttinn committed on
Commit e9d7ae5
1 Parent(s): e98f50c

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -9,7 +9,7 @@ widget:
 
  ## PubMedBERT-large (abstracts only)
 
- Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general domain corpora, such as newswire and Web. A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models. [Recent work](https://arxiv.org/abs/2007.15779) shows that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models.[Followup work](https://arxiv.org/abs/2112.07869) explores larger model sizes and the impact of these on performance on the BLURB benchmark.
+ Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general domain corpora, such as newswire and Web. A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models. [Recent work](https://arxiv.org/abs/2007.15779) shows that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models. [Followup work](https://arxiv.org/abs/2112.07869) explores larger model sizes and the impact of these on performance on the BLURB benchmark.
 
  This PubMedBERT is pretrained from scratch using _abstracts_ from [PubMed](https://pubmed.ncbi.nlm.nih.gov/).
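
For reference, a minimal sketch of querying the model card's checkpoint with the Hugging Face `transformers` fill-mask pipeline. The repository id used below is an assumption for illustration; substitute this repo's actual model id.

```python
# Minimal sketch: masked-token prediction with a PubMedBERT-style checkpoint.
# The model id below is an assumption; replace it with this repository's id.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="microsoft/BiomedNLP-PubMedBERT-large-uncased-abstract",  # assumed repo id
)

# PubMedBERT is a BERT-style encoder, so cloze queries use the standard [MASK] token.
for candidate in fill_mask("The patient was treated with [MASK] for hypertension."):
    print(candidate["token_str"], round(candidate["score"], 3))
```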