# oBERT-12-downstream-dense-squadv1

This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).

It corresponds to the model presented in `Table 3 - 12 Layers - 0% Sparsity`, and it serves as an upper bound on the performance of the corresponding pruned models:
- 80% unstructured: `neuralmagic/oBERT-12-downstream-pruned-unstructured-80-squadv1`
- 80% block-4: `neuralmagic/oBERT-12-downstream-pruned-block4-80-squadv1`
- 90% unstructured: `neuralmagic/oBERT-12-downstream-pruned-unstructured-90-squadv1`
- 90% block-4: `neuralmagic/oBERT-12-downstream-pruned-block4-90-squadv1`

SQuADv1 dev-set results:
```
EM = 82.71
F1 = 89.48
```
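
For reference, here is a minimal sketch of how this checkpoint could be used for extractive question answering with the `transformers` `pipeline` API. It assumes the model is hosted as `neuralmagic/oBERT-12-downstream-dense-squadv1`, inferred from the naming of the pruned variants listed above; adjust the model ID if it differs.

```python
from transformers import pipeline

# Model ID inferred from the naming pattern of the pruned variants above.
qa = pipeline(
    "question-answering",
    model="neuralmagic/oBERT-12-downstream-dense-squadv1",
)

result = qa(
    question="What is oBERT?",
    context=(
        "The Optimal BERT Surgeon (oBERT) is a scalable second-order "
        "pruning method for large language models."
    ),
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```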

## BibTeX entry and citation info
```bibtex
@article{kurtic2022optimal,
  title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
  author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
  journal={arXiv preprint arXiv:2203.07259},
  year={2022}
}
```