ekurtic committed
Commit d9a8a10
1 Parent(s): dbc3622

Model release

.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,32 @@
+ # oBERT-12-upstream-pruned-unstructured-97-v2
+
+ This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
+
+ It corresponds to the upstream pruned model used as a starting point for sparse-transfer learning to downstream tasks, presented in `Table 2 - oBERT - {SQuADv1, MNLI, QQP} - 97%` (in the upcoming updated version of the paper).
+
+ Finetuned versions of this model for each downstream task are:
+
+ - SQuADv1: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1-v2`
+ - MNLI: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli-v2`
+ - QQP: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp-v2`
+
+ ```
+ Pruning method: oBERT upstream unstructured
+ Paper: https://arxiv.org/abs/2203.07259
+ Dataset: BookCorpus and English Wikipedia
+ Sparsity: 97%
+ Number of layers: 12
+ ```
+
+ Code: _coming soon_
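+
+ Until the code is released, a minimal sketch of loading this checkpoint and measuring its sparsity might look as follows (assuming the repo id `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-v2`, inferred from the finetuned model names above, and a standard `transformers` BERT masked-LM checkpoint):
+
+ ```python
+ from transformers import AutoModelForMaskedLM
+
+ # Repo id is an assumption, inferred from the finetuned variants above.
+ model = AutoModelForMaskedLM.from_pretrained(
+     "neuralmagic/oBERT-12-upstream-pruned-unstructured-97-v2"
+ )
+
+ zeros, total = 0, 0
+ for name, param in model.named_parameters():
+     # Unstructured pruning targets the 2-D encoder weight matrices;
+     # embeddings, biases, and LayerNorm parameters typically stay dense.
+     if "encoder" in name and param.dim() == 2:
+         zeros += (param == 0).sum().item()
+         total += param.numel()
+
+ print(f"Encoder weight sparsity: {zeros / total:.2%}")  # expected ~97%
+ ```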
+
+ ## BibTeX entry and citation info
+ ```bibtex
+ @article{kurtic2022optimal,
+   title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
+   author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
+   journal={arXiv preprint arXiv:2203.07259},
+   year={2022}
+ }
+ ```
all_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:90938f273ec66fcdb7029502a666daa7378411018e2370936bdf196d90fe7615
+ size 447
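Since `*.json` is now routed through git-lfs (per the `.gitattributes` change above), the JSON diffs in this commit show only pointer files like the one here, not the actual file contents. A minimal sketch of checking a downloaded blob against such a pointer (file paths are illustrative, not part of this repo):

```python
import hashlib
from pathlib import Path

def parse_pointer(pointer_path: str) -> tuple[str, int]:
    """Return (sha256 hex digest, byte size) from a git-lfs pointer file."""
    fields = dict(
        line.split(" ", 1)
        for line in Path(pointer_path).read_text().splitlines()
        if line
    )
    return fields["oid"].removeprefix("sha256:"), int(fields["size"])

def matches_pointer(pointer_path: str, blob_path: str) -> bool:
    """True if the blob's size and sha256 both match the pointer."""
    oid, size = parse_pointer(pointer_path)
    blob = Path(blob_path).read_bytes()
    return len(blob) == size and hashlib.sha256(blob).hexdigest() == oid
```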
config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b2704bc721fcbf2de9d2c11b30e4ff1eff278a451bf5dbb409565c70e5060348
+ size 744
eval_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9833be78407f4fedb64dee8181537a1e6f680a801675922141d244209810cb5f
+ size 268
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f964e49c3d695fb5e389016e40b003a10603e7dde61377d6f41c6d94c60ae9b7
+ size 438144939
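Worth noting: at 438,144,939 bytes, `pytorch_model.bin` is roughly the size of a dense fp32 BERT-base, which suggests the 97%-sparse weights are stored as explicit zeros rather than in a compressed sparse format. A rough back-of-the-envelope check (the parameter count is an approximation, not taken from this commit):

```python
# Dense fp32 BERT-base: ~110M parameters * 4 bytes/param ~= 440 MB,
# in the same ballpark as the 438,144,939 bytes recorded above.
approx_params = 110_000_000  # approximate BERT-base parameter count
print(f"~{approx_params * 4 / 1e6:.0f} MB")
```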
special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:303df45a03609e4ead04bc3dc1536d0ab19b5358db685b6f3da123d05ec200e3
+ size 112
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d241a60d5e8f04cc1b2b3e9ef7a4921b27bf526d9f6050ab90f9267a1f9e5c66
+ size 711396
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:259e5c3a17b141627b8eb35724fc23e9a1ff443b86c9c4dc5530bcd8b13017e6
+ size 396
train_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f08893aefb89f1578835c24b6735af40daec7af9ef4f2f0ccbc44e857717ec0
+ size 199
trainer_state.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:280837c92b353341d27e31df2785aed960d1d0557b04934debe04c7736f33b69
+ size 137951
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d207f2ec64116861e5ba33cf8e73a4d1ec9fe144544e8e01092055fa874a4d18
+ size 3247
vocab.txt ADDED
The diff for this file is too large to render.