ekurtic committed
Commit e5e0edf
1 Parent(s): f2abef2

Model release
.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,39 @@
+ # oBERT-12-downstream-pruned-unstructured-97-squadv1
+
+ This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
+
+
+ It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - SQuADv1 97%` entry.
+
+ ```
+ Pruning method: oBERT downstream unstructured
+ Paper: https://arxiv.org/abs/2203.07259
+ Dataset: SQuADv1
+ Sparsity: 97%
+ Number of layers: 12
+ ```
+
+ The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
+
+ ```
+ | oBERT 97%    | F1    | EM    |
+ | ------------ | ----- | ----- |
+ | seed=42 (*)  | 86.06 | 78.28 |
+ | seed=3407    | 86.04 | 78.12 |
+ | seed=54321   | 85.85 | 77.93 |
+ | ------------ | ----- | ----- |
+ | mean         | 85.98 | 78.11 |
+ | stdev        | 0.115 | 0.175 |
+ ```
+
+ Code: _coming soon_
+
+ ## BibTeX entry and citation info
+ ```bibtex
+ @article{kurtic2022optimal,
+   title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
+   author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
+   journal={arXiv preprint arXiv:2203.07259},
+   year={2022}
+ }
+ ```
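Usage note (not part of the commit): below is a minimal sketch of loading this checkpoint for extractive question answering with the `transformers` pipeline API, assuming it is a standard BERT QA model. The model identifier is inferred from the repository name and may need to be replaced with the actual Hub id or a local path.

```python
# Minimal usage sketch, assuming a standard BERT question-answering checkpoint.
# The model id below is assumed from the repository name, not stated in the card.
from transformers import pipeline

model_id = "oBERT-12-downstream-pruned-unstructured-97-squadv1"  # assumed id/path

qa = pipeline("question-answering", model=model_id, tokenizer=model_id)

result = qa(
    question="What is the sparsity of the model?",
    context="The oBERT checkpoint was pruned to 97% unstructured sparsity on SQuADv1.",
)
print(result["answer"], result["score"])
```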
config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8486603b13aa568cb30a3f12f76029a3365344a234ace7353010ea02ea15338
+ size 659
eval_results.txt ADDED
@@ -0,0 +1,3 @@
+ exact_match = 78.27814569536424
+ f1 = 86.06284342742502
+ epoch = 30.0
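Since the authors' evaluation code is still marked "coming soon", here is a hedged sketch of how numbers like these can be recomputed from `predictions.json` with the standard SQuAD v1.1 metric; the `{question_id: answer_text}` layout of `predictions.json` is an assumption based on the usual Hugging Face QA scripts.

```python
# Hypothetical sketch, not the authors' script: recompute EM/F1 from predictions.json
# with the standard SQuAD v1.1 metric. Assumes predictions.json maps question ids to
# predicted answer strings, as common Hugging Face QA example scripts produce.
import json

import evaluate
from datasets import load_dataset

metric = evaluate.load("squad")
validation = load_dataset("squad", split="validation")

with open("predictions.json") as f:
    predictions = json.load(f)  # assumed format: {question_id: answer_text}

preds = [{"id": qid, "prediction_text": text} for qid, text in predictions.items()]
refs = [{"id": ex["id"], "answers": ex["answers"]} for ex in validation]

# Should land close to eval_results.txt above (exact_match ~78.28, f1 ~86.06).
print(metric.compute(predictions=preds, references=refs))
```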
nbest_predictions.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:38ca9c31f21d35ebdee8ad15dbb90036b5dfa3f1bea09977dbc92f1ccd7e3e99
+ size 45808357
predictions.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4fa81b3a3a46d065d0fda693124bf21e6e12eca003bd7331969a0d3cd6ccb694
+ size 588547
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c41219939126ae2417f3eabc2bbcbb10187cd250e263f90c83ffb075b17bb963
+ size 435661303
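As a rough sanity check (my assumption, not something documented in the release), the 97% unstructured sparsity stated in the README can be estimated by counting zero-valued entries in the checkpoint's encoder weight matrices; which parameter groups are left dense (embeddings, biases, LayerNorm) is also assumed here.

```python
# Rough sketch under assumptions: estimate unstructured sparsity of the released
# checkpoint by counting zeros in the 2D encoder weight matrices. Embeddings, biases,
# and LayerNorm parameters are assumed to be kept dense and are excluded.
import torch

state_dict = torch.load("pytorch_model.bin", map_location="cpu")

total = zeros = 0
for name, tensor in state_dict.items():
    if "encoder" in name and name.endswith(".weight") and tensor.dim() == 2:
        total += tensor.numel()
        zeros += (tensor == 0).sum().item()

print(f"Sparsity over encoder weight matrices: {zeros / total:.2%}")  # expect ~97%
```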
special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:303df45a03609e4ead04bc3dc1536d0ab19b5358db685b6f3da123d05ec200e3
+ size 112
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a863c20bb9664ba983f10e20d34c790e0eea92f165fc4716c4bad62f6bdc70b4
+ size 285
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab8ad900c72c6d0018dcbaa58356e0781eca1f5beacec423911aac6dca126833
+ size 2415
vocab.txt ADDED
The diff for this file is too large to render.