ekurtic committed on
Commit e353a71
1 Parent(s): b6bb50c

Model release

.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zstandard filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,32 @@
+# oBERT-12-upstream-pruned-unstructured-90-v2
+
+This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
+
+
+It corresponds to the upstream pruned model used as a starting point for sparse-transfer learning to downstream tasks, presented in `Table 2 - oBERT - {SQuADv1, MNLI, QQP} - 90%` (in the upcoming updated version of the paper).
+
+Finetuned versions of this model for each downstream task are:
+
+- SQuADv1: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-squadv1-v2`
+- MNLI: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-mnli-v2`
+- QQP: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2`
+
+```
+Pruning method: oBERT upstream unstructured
+Paper: https://arxiv.org/abs/2203.07259
+Dataset: BookCorpus and English Wikipedia
+Sparsity: 90%
+Number of layers: 12
+```
+
+Code: _coming soon_
+
+## BibTeX entry and citation info
+```bibtex
+@article{kurtic2022optimal,
+  title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
+  author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
+  journal={arXiv preprint arXiv:2203.07259},
+  year={2022}
+}
+```
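Although the model card above lists its code as coming soon, the checkpoint can already be exercised with the standard Hugging Face `transformers` API. Below is a minimal sketch, assuming the repository id `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-v2` (inferred from the model name and the `neuralmagic/...` finetuned variants listed above) and that the upstream checkpoint carries a masked-LM head; it loads the model and checks that the encoder weight matrices are roughly 90% zeros, matching the stated unstructured sparsity.

```python
# Minimal sketch (not the official oBERT release code): load the pruned
# checkpoint and measure the fraction of zero-valued weights in the encoder,
# which should land near the stated 90% sparsity.
# The repo id and the masked-LM head are assumptions, not confirmed above.
from transformers import AutoModelForMaskedLM, AutoTokenizer

repo_id = "neuralmagic/oBERT-12-upstream-pruned-unstructured-90-v2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForMaskedLM.from_pretrained(repo_id)

# Only the 2-D weight matrices inside the 12 encoder layers are pruning
# targets; biases, LayerNorm parameters, and embeddings are left out.
total = zeros = 0
for name, param in model.named_parameters():
    if "encoder" in name and name.endswith("weight") and param.dim() == 2:
        total += param.numel()
        zeros += int((param == 0).sum())

print(f"Encoder weight sparsity: {zeros / total:.1%}")
```

The finetuned `-squadv1-v2`, `-mnli-v2`, and `-qqp-v2` variants listed in the card should load the same way, using the `AutoModelFor...` class that matches each task.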
all_results.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7467f2de0ab8ac3c72273c4c5cb5df0ddc961fe5c6456e9228efb07789739f0
+size 451
config.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b2704bc721fcbf2de9d2c11b30e4ff1eff278a451bf5dbb409565c70e5060348
+size 744
eval_results.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5bea6a8fa0ce7f373401cce4470267c46c0c2280716a72273c87dc4fc51c7478
+size 271
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51d19e3d7fe816b721c33bea4685fec1664a122456cde87ce3a359e8ea5c9836
+size 438144939
special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:303df45a03609e4ead04bc3dc1536d0ab19b5358db685b6f3da123d05ec200e3
+size 112
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d241a60d5e8f04cc1b2b3e9ef7a4921b27bf526d9f6050ab90f9267a1f9e5c66
+size 711396
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:259e5c3a17b141627b8eb35724fc23e9a1ff443b86c9c4dc5530bcd8b13017e6
+size 396
train_results.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:34be823ba7abbd419d80dba79324542b8a439e89b0e74ae079c690f7a9036c0b
+size 200
trainer_state.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:972caa92ab2f2f67086bf480138d481d10c01c09ecd28bfab834e603e5a8a831
+size 137979
training_args.bin ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b4e8605b33128ba3ebb5c135e60191ff4abf5f99c83c87ad09807fafabd270a
+size 3247
vocab.txt ADDED
The diff for this file is too large to render.