ekurtic committed
Commit 54b7c41
1 Parent(s): 18a4512

Model release

.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,33 @@
+ # oBERT-6-upstream-pretrained-dense
+
+ This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
+
+ It corresponds to the 6 layers from `neuralmagic/oBERT-12-upstream-pretrained-dense`, pretrained with knowledge distillation. This model is used as the starting point for the downstream finetuning and pruning runs presented in `Table 3 - 6 Layers` of the paper.
+ The model can also be used as a starting point for finetuning on any downstream task, in place of the twice-as-large `bert-base-uncased` model; see the loading sketch after the list below.
+
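+ How the 6 layers were selected is not stated in this card; purely as an illustrative sketch (assuming a standard `BertForMaskedLM` architecture and that the first 6 encoder blocks are kept), such a truncation could look like:
+
+ ```python
+ from transformers import AutoModelForMaskedLM
+
+ # Illustrative only: derive a 6-layer student from the 12-layer upstream
+ # model. Keeping the first 6 encoder blocks is an assumption, not a detail
+ # confirmed by this card or the paper.
+ model = AutoModelForMaskedLM.from_pretrained(
+     "neuralmagic/oBERT-12-upstream-pretrained-dense"
+ )
+ model.bert.encoder.layer = model.bert.encoder.layer[:6]
+ model.config.num_hidden_layers = 6
+ ```
+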
+ Finetuned and pruned versions of this model on the SQuADv1 downstream task, as described in the paper:
+ - 0%: `neuralmagic/oBERT-6-downstream-dense-squadv1`
+ - 80% unstructured: `neuralmagic/oBERT-6-downstream-pruned-unstructured-80-squadv1`
+ - 80% block-4: `neuralmagic/oBERT-6-downstream-pruned-block4-80-squadv1`
+ - 90% unstructured: `neuralmagic/oBERT-6-downstream-pruned-unstructured-90-squadv1`
+ - 90% block-4: `neuralmagic/oBERT-6-downstream-pruned-block4-90-squadv1`
+
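+ As a minimal usage sketch (not part of the released card), the checkpoint can be loaded with the Hugging Face `transformers` library exactly like `bert-base-uncased`; the task head and `num_labels` below are illustrative placeholders:
+
+ ```python
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ # Illustrative: use this checkpoint in place of `bert-base-uncased` as the
+ # starting point for a downstream task; the classification head is newly
+ # initialized and must be trained on the target task.
+ model_name = "neuralmagic/oBERT-6-upstream-pretrained-dense"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
+
+ inputs = tokenizer("A short example sentence.", return_tensors="pt")
+ logits = model(**inputs).logits  # shape: (1, num_labels)
+ ```
+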
+ ```
+ Training objective: masked language modeling (MLM) + knowledge distillation
+ Paper: https://arxiv.org/abs/2203.07259
+ Dataset: BookCorpus and English Wikipedia
+ Sparsity: 0%
+ Number of layers: 6
+ ```
+
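+ The card does not give the exact distillation recipe (temperature, loss weighting); the following is only a generic sketch of how an MLM loss is commonly combined with knowledge distillation, with `temperature` and `alpha` as assumed hyperparameters:
+
+ ```python
+ import torch.nn.functional as F
+
+ def mlm_kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
+     # Hard-label MLM cross-entropy; `labels` uses -100 on non-masked
+     # positions, following the Hugging Face MLM convention.
+     mlm = F.cross_entropy(
+         student_logits.view(-1, student_logits.size(-1)),
+         labels.view(-1),
+         ignore_index=-100,
+     )
+     # Soft-label KL divergence against the teacher's distribution,
+     # rescaled by temperature**2 as in standard distillation.
+     kd = F.kl_div(
+         F.log_softmax(student_logits / temperature, dim=-1),
+         F.softmax(teacher_logits / temperature, dim=-1),
+         reduction="batchmean",
+     ) * temperature ** 2
+     return alpha * mlm + (1.0 - alpha) * kd
+ ```
+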
+ Code: _coming soon_
+
+ ## BibTeX entry and citation info
+ ```bibtex
+ @article{kurtic2022optimal,
+   title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
+   author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
+   journal={arXiv preprint arXiv:2203.07259},
+   year={2022}
+ }
+ ```
all_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:64e21b9ce39fb53fc925fd9a843dcc9103dfa5730f810b70d3f6b88c34b03ca7
+ size 295
config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d16dddc93d41138172020b843d8fa66de40be2781ac29f7239104191fce1a327
+ size 656
eval_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d7d95031171fcdf07171ecea75f452123936ac2a06bd3530f4fa5de8411fe34
+ size 193
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf23eb560a67335096f44328171518bbee204e8732c0b68b373c6d38dea74f0e
+ size 267997618
special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:303df45a03609e4ead04bc3dc1536d0ab19b5358db685b6f3da123d05ec200e3
+ size 112
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e9b4477ce4e6905827bf0f322979128c9a12ea6b3c59c013001c55c6427cae9c
+ size 384
train_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9161f0590be9648c028abc9055f094933d9dfc5ce26e7e58c5bcbea9b7df1069
+ size 123
trainer_state.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3e8e5cec29338d0611aba58a2a8fc580bf8412eaf4a45f184a08a961841c7799
+ size 94077
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:af167a8c35ccc9716b860a347acbb3be69076d5522bc70679d2eca3ef76066ab
+ size 2415
vocab.txt ADDED
The diff for this file is too large to render.