---
language: en
tags:
- science
- multi-disciplinary
license: apache-2.0
---

# ScholarBERT_100 Model

This is the **ScholarBERT_100_64bit** variant of the ScholarBERT model family. The difference between this variant and the **ScholarBERT_100** model is that its tokenizer is trained with `int64` rather than the default `int32`, so the counts of very frequent tokens (e.g., "the") do not overflow.
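
As a back-of-the-envelope check on the overflow claim (the ~5% relative frequency assumed for "the" below is illustrative, not a measured figure):

```python
INT32_MAX = 2**31 - 1            # 2,147,483,647

corpus_tokens = 221_000_000_000  # the 221B pretraining tokens quoted below
the_share = 0.05                 # assumed relative frequency of "the"

the_count = int(corpus_tokens * the_share)
print(f"estimated count of 'the': {the_count:,}")   # ~11 billion
print(f"overflows int32: {the_count > INT32_MAX}")  # True
```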

The model is pretrained on a large collection of scientific research articles (**221B tokens**).

This is a **cased** (case-sensitive) model: the tokenizer does not convert inputs to lower-case.
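
A minimal sketch of what cased tokenization means in practice, shown with the linked `bert-large-cased` checkpoint for illustration (substitute this repository's Hub ID to load the ScholarBERT tokenizer itself):

```python
from transformers import AutoTokenizer

# Illustrated with bert-large-cased; swap in this repository's Hub ID
# to load the ScholarBERT tokenizer itself.
tokenizer = AutoTokenizer.from_pretrained("bert-large-cased")

# Capitalization is preserved, so "The" and "the" are distinct tokens.
print(tokenizer.tokenize("The model"))  # ['The', 'model']
print(tokenizer.tokenize("the model"))  # ['the', 'model']
```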

The model is based on the same architecture as [BERT-large](https://huggingface.co/bert-large-cased) and has a total of 340M parameters.


# Model Architecture

| Hyperparameter    |  Value  |
|-------------------|:-------:|
| Layers            |   24    |
| Hidden Size       |  1024   |
| Attention Heads   |   16    |
| Total Parameters  |  340M   |
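
A minimal sketch of an equivalent configuration in `transformers` (the `intermediate_size` is the standard BERT-large value, assumed here since the table does not list it; the exact parameter count also depends on the vocabulary size):

```python
from transformers import BertConfig, BertModel

# Hyperparameters mirroring the table above.
config = BertConfig(
    num_hidden_layers=24,
    hidden_size=1024,
    num_attention_heads=16,
    intermediate_size=4096,  # standard BERT-large FFN width (assumption)
)
model = BertModel(config)

# Roughly 335M with the default vocabulary; the quoted 340M depends on
# ScholarBERT's actual vocabulary size.
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```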


# Training Dataset

The vocabulary and the model are pretrained on **100% of the PRD** scientific literature dataset.

The PRD dataset is provided by Public.Resource.Org, Inc. (“Public Resource”), a nonprofit organization based in California. The dataset was constructed from a corpus of journal article files, from which we successfully extracted the text of 75,496,055 articles from 178,928 journals. The articles span Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences, Social Sciences, and Technology. The distribution of articles is shown below.

![corpus pie chart](https://huggingface.co/globuslabs/ScholarBERT/resolve/main/corpus%20pie%20chart.png)


# BibTeX entry and citation info

If you use this model, please cite the following paper:
```
@inproceedings{hong2023diminishing,
  title={The diminishing returns of masked language models to science},
  author={Hong, Zhi and Ajith, Aswathy and Pauloski, James and Duede, Eamon and Chard, Kyle and Foster, Ian},
  booktitle={Findings of the Association for Computational Linguistics: ACL 2023},
  pages={1270--1283},
  year={2023}
}
```