initial commit
- README.md +34 -0
- config.json +34 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +1 -0
- tokenizer_config.json +1 -0
- vocab.txt +0 -0
README.md
ADDED
@@ -0,0 +1,34 @@
# SciBERT Longformer finetuned to SDG classification

This is a Longformer version of the [SciBERT uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) model by Allen AI, finetuned to Sustainable Development Goals (SDG) classification. The model is slower than SciBERT (~2.5x in my benchmarks) but allows for an 8x wider `max_seq_length` (4096 vs. 512), which is handy when working with long texts, e.g. scientific full texts.
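
A minimal loading sketch (the repository id below is a placeholder, since the actual model id is not stated here, and using the generic `Auto*` classes is an assumption rather than an instruction from this repo):

```python
from transformers import AutoModel, AutoTokenizer

# Placeholder repository id -- replace with this repo's actual model id.
MODEL_ID = "your-username/scibert-longformer-sdg"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Long inputs are the point of the Longformer conversion: up to 4096 tokens.
text = "Full text of a scientific article ..."
inputs = tokenizer(text, truncation=True, max_length=4096, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```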

The conversion to Longformer was performed with a [tutorial](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) by Allen AI: see a [Google Colab Notebook](https://colab.research.google.com/drive/1NPTnMkeAYOF2MWH3_uJYesuxxdOzxrFn?usp=sharing) by [Yury](https://yorko.github.io/) which closely follows the tutorial.
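
For reference, the core of that conversion is widening the learned position embeddings from 512 to 4096 slots by tiling them, and recording a per-layer sliding attention window in the config. The sketch below only illustrates that idea for a BERT-style checkpoint; it is not the notebook's exact code and omits the tutorial's replacement of each layer's self-attention with Longformer sliding-window attention, so see the notebook for the complete procedure.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizerFast

# Start from the original SciBERT checkpoint, which has 512 learned positions.
model = BertForMaskedLM.from_pretrained("allenai/scibert_scivocab_uncased")
tokenizer = BertTokenizerFast.from_pretrained(
    "allenai/scibert_scivocab_uncased", model_max_length=4096
)

max_pos = 4096
old_pos_emb = model.bert.embeddings.position_embeddings.weight.detach()  # (512, 768)
old_max_pos, embed_dim = old_pos_emb.shape

# Tile the 512 learned position vectors until the new 4096-slot table is filled.
new_pos_emb = old_pos_emb.new_empty(max_pos, embed_dim)
k = 0
while k < max_pos:
    new_pos_emb[k:k + old_max_pos] = old_pos_emb[:max_pos - k]
    k += old_max_pos

model.bert.embeddings.position_embeddings.weight.data = new_pos_emb
model.bert.embeddings.position_ids = torch.arange(max_pos).unsqueeze(0)

# Record the new limits in the config (cf. config.json below).
model.config.max_position_embeddings = max_pos
model.config.attention_window = [512] * model.config.num_hidden_layers
```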

Notes:

- no additional MLM pretraining of the Longformer was performed: the [Colab notebook](https://colab.research.google.com/drive/1NPTnMkeAYOF2MWH3_uJYesuxxdOzxrFn?usp=sharing) stops at step 3, and step 4 is not done. The model can be improved with this additional MLM pretraining, preferably on scientific texts, e.g. [S2ORC](https://github.com/allenai/s2orc), again by Allen AI;
- no extensive benchmarks of SciBERT Longformer vs. SciBERT were performed in terms of downstream task performance.

Links:

- the original [SciBERT repo](https://github.com/allenai/scibert)
- the original [Longformer repo](https://github.com/allenai/longformer)

If using these models, please consider citing the following papers:

```
@inproceedings{beltagy-etal-2019-scibert,
    title = "SciBERT: A Pretrained Language Model for Scientific Text",
    author = "Beltagy, Iz and Lo, Kyle and Cohan, Arman",
    booktitle = "EMNLP",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/D19-1371"
}

@article{Beltagy2020Longformer,
    title={Longformer: The Long-Document Transformer},
    author={Iz Beltagy and Matthew E. Peters and Arman Cohan},
    journal={arXiv:2004.05150},
    year={2020},
}
```

config.json
ADDED
@@ -0,0 +1,34 @@
{
  "architectures": [
    "BertForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "attention_window": [
    512,
    512,
    512,
    512,
    512,
    512,
    512,
    512,
    512,
    512,
    512,
    512
  ],
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 4096,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "type_vocab_size": 2,
  "vocab_size": 31090
}
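
These settings can be inspected programmatically; a small check (the repository id is again a placeholder) confirms the widened position embeddings, the per-layer 512-token attention window, and SciBERT's scivocab size:

```python
from transformers import AutoConfig

# Placeholder repository id -- replace with this repo's actual model id.
config = AutoConfig.from_pretrained("your-username/scibert-longformer-sdg")

print(config.model_type)               # "bert"
print(config.max_position_embeddings)  # 4096
print(config.attention_window)         # [512, 512, ..., 512], one entry per layer
print(config.vocab_size)               # 31090
```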
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:48db774448d6c458effaee86de8b6d656b6571f4de8df2f148542a6c7db8b7c7
size 450822016
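
The entry above is a Git LFS pointer rather than the weights themselves; one way (an assumption, not part of this repo) to fetch the actual ~450 MB binary is via `huggingface_hub`:

```python
from huggingface_hub import hf_hub_download

# Placeholder repository id -- replace with this repo's actual model id.
path = hf_hub_download(
    repo_id="your-username/scibert-longformer-sdg",
    filename="pytorch_model.bin",
)
print(path)  # local cache path of the resolved weights file, not the LFS pointer
```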
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
{"model_max_length": 4096, "special_tokens_map_file": null, "full_tokenizer_file": null}
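
Together with the special tokens map above, this yields a BERT-style tokenizer whose `model_max_length` matches the 4096-token limit; a quick check (placeholder repository id again):

```python
from transformers import AutoTokenizer

# Placeholder repository id -- replace with this repo's actual model id.
tokenizer = AutoTokenizer.from_pretrained("your-username/scibert-longformer-sdg")

print(tokenizer.model_max_length)  # 4096
print(tokenizer.cls_token, tokenizer.sep_token, tokenizer.mask_token)  # [CLS] [SEP] [MASK]
```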
vocab.txt
ADDED
The diff for this file is too large to render.