ekurtic committed
Commit b592c1e
1 Parent(s): d1ca19f

Model release

.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
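
With this rule, `*.json` files in the repository are stored through Git LFS, so the JSON entries added below appear as LFS pointer stubs (version, oid, size) rather than their actual contents. As an illustration only, one way to fetch a resolved file is via `huggingface_hub`; the repo id here is an assumption based on the model name:

```python
import json

from huggingface_hub import hf_hub_download

# Sketch only: the repo id is assumed from the model name in the README below.
local_path = hf_hub_download(
    repo_id="neuralmagic/oBERT-3-downstream-dense-QAT-squadv1",  # assumption
    filename="eval_results.json",
)

# hf_hub_download returns a local path to the resolved file; LFS pointers are
# replaced by the real content when downloading from the Hub.
with open(local_path) as f:
    print(json.load(f))
```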
README.md ADDED
@@ -0,0 +1,27 @@
+ # oBERT-3-downstream-dense-QAT-squadv1
+
+ This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
+
+ It corresponds to the model presented in `Table 3 - 3 Layers - 0% Sparsity - QAT`, and it represents an upper bound for the performance of the corresponding pruned and quantized models:
+ - 80% unstructured QAT: `neuralmagic/oBERT-3-downstream-pruned-unstructured-80-QAT-squadv1`
+ - 80% block-4 QAT: `neuralmagic/oBERT-3-downstream-pruned-block4-80-QAT-squadv1`
+ - 90% unstructured QAT: `neuralmagic/oBERT-3-downstream-pruned-unstructured-90-QAT-squadv1`
+ - 90% block-4 QAT: `neuralmagic/oBERT-3-downstream-pruned-block4-90-QAT-squadv1`
+
+ SQuADv1 dev-set results:
+ ```
+ EM = 76.06
+ F1 = 84.25
+ ```
+
+ Code: _coming soon_
+
+ ## BibTeX entry and citation info
+ ```bibtex
+ @article{kurtic2022optimal,
+     title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
+     author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
+     journal={arXiv preprint arXiv:2203.07259},
+     year={2022}
+ }
+ ```
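
The card lists code as coming soon. Purely as an illustration (not taken from the model card), a minimal sketch of running the checkpoint for extractive QA with `transformers` might look like the following; the repo id is assumed from the model name, and a QAT-exported checkpoint may in practice need SparseML's transformers integration rather than a plain load:

```python
from transformers import pipeline

# Sketch only: repo id assumed from the model name; the checkpoint is a
# 3-layer BERT fine-tuned on SQuADv1 with quantization-aware training.
qa = pipeline(
    "question-answering",
    model="neuralmagic/oBERT-3-downstream-dense-QAT-squadv1",  # assumption
)

result = qa(
    question="What is used to prune the model?",
    context="The Optimal BERT Surgeon uses second-order information to prune "
            "large language models.",
)
print(result["answer"], result["score"])
```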
all_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7a3a3bb264477e37f982e9638032e3774b516be2bfa380ad191bd049008a496
+ size 251
config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4641c6f957378584d77188cc7e8a6d5d786b0a2f79e3ece5d21661f089938f35
+ size 669
eval_nbest_predictions.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d6303c3b69eebb74dc3091dd82a2cb22279d61304063586452e31dfa9d827d4
+ size 49414307
eval_predictions.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f07329d8cea295a6ab2854449b500d58c7373ba3b46063317c753da38287e262
+ size 599075
eval_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc85fe97da7484e9e386f97810305c37091c2d88d4e7a7d4304958bbf4333e65
+ size 113
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7d374dcc9f2f288c9356474054a0d8c8fd3245413e622598a149851081bef76b
+ size 180695361
recipe.yaml ADDED
@@ -0,0 +1,14 @@
+ !LayerPruningModifier
+     end_epoch: -1.0
+     layers: ['bert.encoder.layer.3', 'bert.encoder.layer.4', 'bert.encoder.layer.5', 'bert.encoder.layer.6', 'bert.encoder.layer.7', 'bert.encoder.layer.8', 'bert.encoder.layer.9', 'bert.encoder.layer.10', 'bert.encoder.layer.11']
+     start_epoch: -1.0
+     update_frequency: -1.0
+
+ !QuantizationModifier
+     disable_quantization_observer_epoch: 5.0
+     end_epoch: -1.0
+     freeze_bn_stats_epoch: 5.0
+     quantize_embeddings: 1
+     start_epoch: 0.0
+     submodules: ['bert.encoder', 'bert.embeddings', 'qa_outputs']
+
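
The recipe drops encoder layers 3 through 11 (leaving the 3-layer model the card describes) and runs quantization-aware training over the remaining encoder, the embeddings, and the QA head, with quantization observers disabled and batch-norm stats frozen at epoch 5. A minimal sketch of applying such a recipe with SparseML's PyTorch integration follows; entry points and signatures vary across SparseML versions, and the optimizer settings and steps-per-epoch value are placeholders:

```python
import torch
from sparseml.pytorch.optim import ScheduledModifierManager
from transformers import AutoModelForQuestionAnswering

# Placeholder starting point: a dense 12-layer BERT QA model; the recipe's
# LayerPruningModifier removes layers 3-11 when the recipe is applied.
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)  # placeholder hyperparameters

# Parse the recipe and wrap the optimizer so the layer-pruning and QAT
# modifiers are activated on their scheduled epochs during fine-tuning.
manager = ScheduledModifierManager.from_yaml("recipe.yaml")
optimizer = manager.modify(model, optimizer, steps_per_epoch=1000)  # placeholder

# ... run the usual SQuADv1 fine-tuning loop with this model/optimizer ...

manager.finalize(model)  # remove modifier hooks once training is done
```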
special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:303df45a03609e4ead04bc3dc1536d0ab19b5358db685b6f3da123d05ec200e3
+ size 112
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5fd1c882abbd30517dced455a2c9768945ec726b96727927e4959348d9de550b
+ size 466081
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:297a2bab9748edd4a81b858ffc4503ad3b3429e2bfec19c235d7575ab3595d5b
+ size 383
train_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f007c0787a072d33a21396b2a75ae787829cb57588868399f9e762f2df95e637
+ size 159
trainer_state.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b1e381b5e498469ade25ae3ed6ae4962ca2c0f935cb3eae09d7526c448a5419b
+ size 7329
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d1dfba59f0b58c44adaf6e6f43c7e7009479949cec4e394dc6ae7f376f8e6fba
+ size 2607
vocab.txt ADDED
The diff for this file is too large to render. See raw diff