ekurtic committed
Commit 12b51b6
1 Parent(s): 7e896cb

Model release

.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,33 @@
+ # oBERT-3-upstream-pretrained-dense
+
+ This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
+
+ It corresponds to 3 layers taken from `neuralmagic/oBERT-12-upstream-pretrained-dense`, pretrained with knowledge distillation. It is used as the starting point for the downstream finetuning and pruning runs presented in `Table 3 - 3 Layers` of the paper.
+ The model can also be used as a starting point for finetuning on any downstream task, in place of the three-times-larger `bert-base-uncased` model.
+
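+ A minimal sketch of loading this checkpoint for downstream finetuning, assuming the standard Hugging Face `transformers` API (the sequence-classification head and `num_labels=2` below are illustrative choices, not prescribed by the paper):
+
+ ```python
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ model_name = "neuralmagic/oBERT-3-upstream-pretrained-dense"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ # A fresh task head is attached on top of the 3-layer pretrained encoder;
+ # num_labels=2 is a hypothetical binary-classification setup.
+ model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
+ ```
+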
+ Finetuned and pruned versions of this model on the SQuADv1 downstream task, as described in the paper:
+ - 0%: `neuralmagic/oBERT-3-downstream-dense-squadv1`
+ - 80% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-80-squadv1`
+ - 80% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-80-squadv1`
+ - 90% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-90-squadv1`
+ - 90% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-90-squadv1`
+
+ ```
+ Training objective: masked language modeling (MLM) + knowledge distillation
+ Paper: https://arxiv.org/abs/2203.07259
+ Dataset: BookCorpus and English Wikipedia
+ Sparsity: 0%
+ Number of layers: 3
+ ```
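+
+ As a rough illustration of the training objective above, a generic PyTorch sketch that combines the hard-label MLM loss with a temperature-smoothed KL term against a teacher (the temperature `T` and mixing weight `alpha` are illustrative, not the paper's hyperparameters):
+
+ ```python
+ import torch.nn.functional as F
+
+ def kd_mlm_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
+     # Hard-label MLM cross-entropy; positions labeled -100 are ignored.
+     ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
+                          labels.view(-1), ignore_index=-100)
+     # Soft-label term: match the teacher's temperature-smoothed distribution.
+     kl = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
+                   F.softmax(teacher_logits / T, dim=-1),
+                   reduction="batchmean") * (T * T)
+     return alpha * ce + (1.0 - alpha) * kl
+ ```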
+
+ Code: _coming soon_
+
+ ## BibTeX entry and citation info
+ ```bibtex
+ @article{kurtic2022optimal,
+   title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
+   author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
+   journal={arXiv preprint arXiv:2203.07259},
+   year={2022}
+ }
+ ```
all_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4399499884eb31755f73d19f51b2cee5cee1f1d778303c5d00b2beb4d441488a
+ size 296
config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1cf27a4cac46a5d951f9cd136095181f552ea0a2f7e3c853d25482342dce44b7
+ size 649
eval_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a0ca37ec11634693dd2f3aa2ea054ed21573dbdb4a784bfb733c1a8cda528cb3
+ size 193
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e01d2ffddb1423b00ee388af104bfe005c99b7fc45386a1294155c581ea99505
+ size 182922882
special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:303df45a03609e4ead04bc3dc1536d0ab19b5358db685b6f3da123d05ec200e3
+ size 112
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2c0ac5b4a3f74d359907a3280fe543d0fe980d5b3f33669820566387e4e734bf
+ size 377
train_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:644b2dc854dcae83886f6d9fd25870b43058bf097810f24bfe26eb8496f58753
+ size 124
trainer_state.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa09be8a31da1b77b1c6fff5ccea23e787ae52c40f129fad35e385c6419a76b8
+ size 94091
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bd1e41e951c983a3b7beaf824021829c1f6b71f81dbb6cbe71f4d802521d046e
+ size 2415
vocab.txt ADDED
The diff for this file is too large to render.
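
Each file marked `ADDED` above is stored as a Git LFS pointer rather than its raw contents, which is why its diff shows only `version`, `oid`, and `size` lines. A minimal sketch of reading one such pointer in Python (the helper name is illustrative):

```python
from pathlib import Path

def read_lfs_pointer(path):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in Path(path).read_text().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# On a checkout without `git lfs pull`, this prints e.g.
# {'version': 'https://git-lfs.github.com/spec/v1', 'oid': 'sha256:...', 'size': '296'}
print(read_lfs_pointer("all_results.json"))
```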