aidenygu committed on
Commit
e0f5327
1 Parent(s): e241513

Update README.md

Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -12,8 +12,7 @@ widget:
  <div style="border: 2px solid orange; border-radius:10px; padding:0px 10px; width: fit-content;">
 
  * NOTE: This model was previously named **"PubMedBERT (abstracts + full text)"**.<br>
- * In code using the previous name, please update to `transformers>=4.22` for compatibility.
-
+ * If your code references the old name, you can either update the model name to **"microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext"** or retain the old name, but ensure you are using transformers version **"4.22"** or later for compatibility.
 
  </div>
 
  Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general domain corpora, such as newswire and Web. A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models. [Recent work](https://arxiv.org/abs/2007.15779) shows that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models.
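Code that still references the old checkpoint name can bridge the rename with a small compatibility shim. A minimal sketch follows; the `resolve_model_name` helper and the old repo id `microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext` are illustrative assumptions on my part, not part of this commit:

```python
# Hypothetical shim mapping a deprecated Hub repo id to its renamed successor.
# The old id in this table is an assumption for illustration; verify it
# against the actual Hub listing before relying on it.
OLD_TO_NEW = {
    "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext":
        "microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
}

def resolve_model_name(name: str) -> str:
    """Return the current Hub repo id for a possibly outdated model name."""
    return OLD_TO_NEW.get(name, name)

# Downstream loading code can then pass the resolved id to transformers, e.g.:
# from transformers import AutoModelForMaskedLM
# model = AutoModelForMaskedLM.from_pretrained(resolve_model_name(old_name))
```

Alternatively, per the README note above, the old name can be kept as-is provided the installed transformers version is 4.22 or later.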