ekurtic committed
Commit 0afbdf1
1 Parent(s): bfb82a0

Model release

.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,40 @@
+ # oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2
+
+ This model is obtained with the method from [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
+
+ It corresponds to the model presented in `Table 2 - oBERT - QQP 90%` (in the upcoming updated version of the paper).
+
+ ```
+ Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
+ Paper: https://arxiv.org/abs/2203.07259
+ Dataset: QQP
+ Sparsity: 90%
+ Number of layers: 12
+ ```
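+
+ As a quick usage check, the files in this release form a standard `transformers` checkpoint (`config.json`, `pytorch_model.bin`, tokenizer files), so it should load like any BERT sequence-classification model. A minimal sketch, assuming the repo id matches the title above; the two example questions are made up:
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ # Assumed repo id (taken from the title above); adjust to wherever the checkpoint is hosted.
+ model_id = "oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSequenceClassification.from_pretrained(model_id)
+
+ # QQP is a sentence-pair task: are the two questions duplicates?
+ inputs = tokenizer("How do I learn Python?",
+                    "What is the best way to learn Python?",
+                    return_tensors="pt")
+ with torch.no_grad():
+     logits = model(**inputs).logits
+ print(model.config.id2label[logits.argmax(dim=-1).item()])
+ ```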
+
+ The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
+
+ ```
+ | oBERT 90%      | acc   | F1    |
+ | -------------- | ----- | ----- |
+ | seed=42        | 90.94 | 87.79 |
+ | seed=3407      | 91.00 | 87.81 |
+ | seed=123       | 90.94 | 87.73 |
+ | seed=12345 (*) | 91.07 | 87.92 |
+ | -------------- | ----- | ----- |
+ | mean           | 90.99 | 87.81 |
+ | stdev          | 0.061 | 0.079 |
+ ```
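+
+ Because the pruned weights are stored as explicit zeros, the 90% sparsity figure can be sanity-checked directly on the loaded model. A rough sketch, assuming `model` from the snippet above, and assuming pruning targets the 2-D encoder weight matrices (not embeddings, biases, or the classifier head):
+
+ ```python
+ # Count zeros over the encoder weight matrices only (assumed pruning scope).
+ total = zeros = 0
+ for name, param in model.named_parameters():
+     if "encoder" in name and param.dim() == 2:
+         total += param.numel()
+         zeros += (param == 0).sum().item()
+ print(f"Encoder weight sparsity: {zeros / total:.1%}")  # expect roughly 90%
+ ```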
+
+ Code: _coming soon_
+
+ ## BibTeX entry and citation info
+ ```bibtex
+ @article{kurtic2022optimal,
+   title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
+   author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
+   journal={arXiv preprint arXiv:2203.07259},
+   year={2022}
+ }
+ ```
all_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ee56b99e39b0d00d6fe5cb77efc42e9edf424478615b6c6fe403de1c0937532
+ size 492
config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f52977ab7da9222e25268e3371e413a97b34029216bdf90061c9c20c661e78bf
+ size 987
eval_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:568bb3f1b27be3ba050a8d44aedce066317b506c8cc2b4fad95fcecc9f74817d
+ size 313
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fbb105f65c927e1ac6d7347e3a95e70b9d68aa357ce3a82733b5e20deea86451
+ size 438022125
special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:303df45a03609e4ead04bc3dc1536d0ab19b5358db685b6f3da123d05ec200e3
+ size 112
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da0e79933b9ed51798a3ae27893d3c5fa4a201126cef75586296df9b4d2c62a0
+ size 711661
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf09e5fe961d8190dafe6bc12d206b801ee0ccccdc7278d0923ecbe72b6677a8
+ size 391
train_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e93e2a1359f188738c58bc6b9ac602542eb8e0cd4de443fcf66fbb380ba355ad
+ size 199
trainer_state.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1221968e0b0705dd9ca523a49827b551562ff1e341fc812f0ac75fb2ffa4fc4e
+ size 25390
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f7b29e7be6c689be93861d449fa25bd88610fe1aa4c504621378f5f812ce75b
+ size 3375
vocab.txt ADDED
The diff for this file is too large to render.