Li committed on
Commit 6786873
1 Parent(s): 7e19caa

add YAML metadata and contact information.

Files changed (1)
  1. README.md +16 -1
README.md CHANGED
@@ -1,6 +1,13 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ ---
+
+
  Bioformer is a lightweight BERT model for biomedical text mining. Bioformer uses a biomedical vocabulary and is pre-trained from scratch only on biomedical domain corpora. Our experiments show that Bioformer is 3x as fast as BERT-base, and achieves comparable or even better performance than BioBERT/PubMedBERT on downstream NLP tasks.

- Bioformer has 8 layers (transformer blocks) with a hidden embedding size of 512, and the number of self-attention heads is 8. and its total number of parameters is 42,820,610.
+ Bioformer has 8 layers (transformer blocks) with a hidden embedding size of 512, and the number of self-attention heads is 8. Its total number of parameters is 42,820,610.

  ## Vocabulary of Bioformer
  Bioformer uses a cased WordPiece vocabulary trained from a biomedical corpus, which included all PubMed abstracts (33 million, as of Feb 1, 2021) and 1 million PMC full-text articles. PMC has 3.6 million articles but we down-sampled them to 1 million such that the total size of PubMed abstracts and PMC full-text articles are approximately equal. To mitigate the out-of-vocabulary issue and include special symbols (e.g. male and female symbols) in biomedical literature, we trained Bioformer’s vocabulary from the Unicode text of the two resources. The vocabulary size of Bioformer is 32768 (2^15), which is similar to that of the original BERT.
@@ -10,3 +17,11 @@ Bioformer was pre-trained from scratch on the same corpus as the vocabulary (33

  Pre-training of Bioformer was performed on a single Cloud TPU device (TPUv2, 8 cores, 8GB memory per core). The maximum input sequence length was fixed to 512, and the batch size was set to 256. We pre-trained Bioformer for 2 million steps, which took about 8.3 days.

+ ## Acknowledgment
+
+ Bioformer is partly supported by the Google TPU Research Cloud (TRC) program.
+
+ ## Questions
+ If you have any questions, please submit an issue here: https://github.com/WGLab/bioformer/issues
+
+ You can also send an email to Li Fang (fangli2718@gmail.com)
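
As a rough cross-check of the updated model-description line above (8 layers, hidden size 512, 8 attention heads, 42,820,610 parameters), the count can be reproduced with Hugging Face `transformers`. This is a sketch that assumes the standard BERT defaults for values the card does not state (an intermediate size of 2048, i.e. 4x the hidden size, and 512 position embeddings) and counts the MLM/NSP pre-training heads:

```python
# Sketch: cross-check the stated parameter count (42,820,610) from the
# architecture numbers in the model card. intermediate_size and
# max_position_embeddings are assumed BERT defaults, not stated in the card.
from transformers import BertConfig, BertForPreTraining

config = BertConfig(
    vocab_size=32768,             # 2^15, from the "Vocabulary of Bioformer" section
    hidden_size=512,
    num_hidden_layers=8,
    num_attention_heads=8,
    intermediate_size=2048,       # assumption: 4 x hidden size (BERT default)
    max_position_embeddings=512,  # assumption: matches the 512-token input limit
)
model = BertForPreTraining(config)  # includes the MLM and NSP heads

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,}")  # 42,820,610 under these assumptions
```

Most of the size reduction relative to BERT-base (about 110M parameters) comes from the smaller hidden size and from using 8 rather than 12 layers.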
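
The "Vocabulary of Bioformer" section above describes training a cased WordPiece vocabulary of 32,768 tokens from the Unicode text of PubMed abstracts and PMC articles. A minimal sketch of that step with the Hugging Face `tokenizers` library follows; the corpus file name and the normalization flags are assumptions rather than details from the card:

```python
# Sketch: train a cased WordPiece vocabulary of 32768 (2^15) tokens, as
# described in the "Vocabulary of Bioformer" section. "pubmed_pmc.txt" is a
# hypothetical file holding the PubMed abstracts and down-sampled PMC articles.
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(
    lowercase=False,      # cased vocabulary
    strip_accents=False,  # keep Unicode symbols such as the male/female signs
)
tokenizer.train(
    files=["pubmed_pmc.txt"],
    vocab_size=32768,
)
tokenizer.save_model(".")  # writes vocab.txt for use with a BERT tokenizer
```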
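
Finally, since the pre-training section above fixes the maximum input sequence length at 512 tokens, a minimal usage sketch for the released checkpoint looks like the following; the hub ID is a placeholder and should be replaced with this repository's actual model ID:

```python
# Sketch: encode a sentence with the released checkpoint. The hub ID below is
# a placeholder, not taken from the model card.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "bioformer/bioformer-8L"  # placeholder, substitute the real model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer(
    "Mutations in BRCA1 are associated with breast cancer.",
    return_tensors="pt",
    truncation=True,
    max_length=512,  # the maximum sequence length used during pre-training
)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, num_tokens, 512)
```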