roberttinn committed on
Commit
55449f5
1 Parent(s): b6e47f3

Upload 5 files

Files changed (5)
  1. README.md +31 -0
  2. config.json +13 -0
  3. pytorch_model.bin +3 -0
  4. tokenizer_config.json +3 -0
  5. vocab.txt +0 -0
README.md ADDED
@@ -0,0 +1,31 @@
+ ---
+ language: en
+ tags:
+ - exbert
+ license: mit
+ widget:
+ - text: "[MASK] is a tyrosine kinase inhibitor."
+ ---
+
+ ## PubMedELECTRA-base (abstracts only)
+
+ Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general-domain corpora, such as newswire and the Web. A prevailing assumption is that even domain-specific pretraining can benefit from starting with general-domain language models. [Recent work](https://arxiv.org/abs/2007.15779) shows that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch yields substantial gains over continual pretraining of general-domain language models.
+
+ PubMedBERT is pretrained from scratch using _abstracts_ from [PubMed](https://pubmed.ncbi.nlm.nih.gov/). It achieves state-of-the-art performance on several biomedical NLP tasks, as shown on the [Biomedical Language Understanding and Reasoning Benchmark](https://aka.ms/BLURB).
+
+ ## Citation
+
+ If you find PubMedBERT useful in your research, please cite the following paper:
+
+ ```latex
+ @misc{pubmedbert,
+   author = {Yu Gu and Robert Tinn and Hao Cheng and Michael Lucas and Naoto Usuyama and Xiaodong Liu and Tristan Naumann and Jianfeng Gao and Hoifung Poon},
+   title = {Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing},
+   year = {2020},
+   eprint = {arXiv:2007.15779},
+ }
+ ```
+
+ <a href="https://huggingface.co/exbert/?model=microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract&modelKind=bidirectional&sentence=Gefitinib%20is%20an%20EGFR%20tyrosine%20kinase%20inhibitor,%20which%20is%20often%20used%20for%20breast%20cancer%20and%20NSCLC%20treatment.&layer=10&heads=..0,1,2,3,4,5,6,7,8,9,10,11&threshold=0.7&tokenInd=17&tokenSide=right&maskInds=..&hideClsSep=true">
+ <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
+ </a>
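The widget text in the README front matter corresponds to a fill-mask query. A hypothetical usage sketch with the Hugging Face `transformers` library (not part of this commit; assumes the library is installed and uses the model id from the exbert link above, which triggers a download of the checkpoint):

```python
# Sketch: run the model-card widget example as a fill-mask pipeline.
# Assumes `transformers` is installed; the model id comes from the
# exbert link in the README and is downloaded on first use.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract",
)

for pred in fill_mask("[MASK] is a tyrosine kinase inhibitor."):
    print(pred["token_str"], round(pred["score"], 3))
```

Each prediction is a dict with the filled token and its probability; the top candidates should be drug names, reflecting the biomedical pretraining corpus.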
config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "attention_probs_dropout_prob": 0.1,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "max_position_embeddings": 512,
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "type_vocab_size": 2,
+   "vocab_size": 30522
+ }
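The config describes a standard base-size bidirectional encoder (12 layers, hidden size 768, 12 heads). As a sanity check, the parameter count implied by these hyperparameters can be tallied by hand (a sketch assuming the usual BERT-base layout, including the pooler head but no task head):

```python
# Back-of-the-envelope parameter count from the config above,
# assuming the standard BERT-base weight layout.
hidden, layers = 768, 12
intermediate, vocab, max_pos, type_vocab = 3072, 30522, 512, 2

# Embeddings: token + position + token-type tables, plus one LayerNorm.
embeddings = (vocab + max_pos + type_vocab) * hidden + 2 * hidden

# Per encoder layer: Q/K/V/output projections, feed-forward, two LayerNorms.
attention = 4 * (hidden * hidden + hidden)
feed_forward = (hidden * intermediate + intermediate) + (intermediate * hidden + hidden)
layer_norms = 2 * (2 * hidden)
per_layer = attention + feed_forward + layer_norms

pooler = hidden * hidden + hidden
total = embeddings + layers * per_layer + pooler
print(f"{total:,} parameters")  # 109,482,240 — the usual "110M" base size
```

The ~110M total is consistent with the ~440 MB `pytorch_model.bin` below (4 bytes per float32 weight).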
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:945b5f81b23dac9297dd943e430ced31a189bb6e531cfaaa084b3cb7ac7c312d
+ size 440509869
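The file content above is a Git LFS pointer, not the weights themselves; the actual ~440 MB binary is fetched by the LFS smudge filter using the recorded oid and size. A minimal sketch of parsing such a pointer (the line format is `key value`, per the spec URL in the pointer):

```python
# Parse a Git LFS pointer file: each line is "key value", split on the
# first space. Contents copied from the pointer committed above.
pointer_text = """\
version https://git-lfs.github.com/spec/v1
oid sha256:945b5f81b23dac9297dd943e430ced31a189bb6e531cfaaa084b3cb7ac7c312d
size 440509869
"""

pointer = dict(line.split(" ", 1) for line in pointer_text.splitlines())
algo, digest = pointer["oid"].split(":", 1)

print(algo, int(pointer["size"]))  # sha256 440509869
```

After checkout, LFS verifies that the downloaded blob's SHA-256 digest matches `oid` and its byte length matches `size`.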
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ {
+   "do_lower_case": true
+ }
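The only option set here is `do_lower_case`, matching the uncased checkpoint: input text is lowercased before WordPiece tokenization, so cased and uncased spellings map to the same vocabulary entries. A minimal illustration of the flag's effect (plain-Python lowering only; the real tokenizer additionally handles punctuation splitting and accent stripping):

```python
# Illustrate the effect of "do_lower_case": true from tokenizer_config.json.
# Plain lowercasing only — a simplification of the full BERT pre-tokenizer.
import json

cfg = json.loads('{"do_lower_case": true}')

text = "Gefitinib is an EGFR tyrosine kinase inhibitor."
if cfg["do_lower_case"]:
    text = text.lower()
print(text)  # gefitinib is an egfr tyrosine kinase inhibitor.
```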
vocab.txt ADDED
The diff for this file is too large to render. See raw diff