---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---
# oBERT-12-downstream-pruned-unstructured-80-qqp

This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).


It corresponds to the model presented in `Table 1 - 30 Epochs - oBERT - QQP 80%` of the paper.

```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 80%
Number of layers: 12
```
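A minimal usage sketch, not taken from the paper: the Hub id, label convention, and example questions below are assumptions for illustration only, showing how the pruned checkpoint could be loaded with `transformers` for QQP-style duplicate-question classification.

```python
# Sketch only: the Hub id is assumed from the model name above.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model_id = "neuralmagic/oBERT-12-downstream-pruned-unstructured-80-qqp"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# QQP is a sentence-pair (duplicate-question) classification task.
q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
inputs = tokenizer(q1, q2, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()  # label convention assumed: 1 = duplicate, 0 = not duplicate
print(pred)
```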

The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):

```
| oBERT 80%    | acc   | F1    |
| ------------ | ----- | ----- |
| seed=42   (*)| 91.66 | 88.72 |
| seed=3407    | 91.51 | 88.56 |
| seed=54321   | 91.54 | 88.60 |
| ------------ | ----- | ----- |
| mean         | 91.57 | 88.63 |
| stdev        | 0.079 | 0.083 |
```

Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
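As a quick sanity check of the reported 80% unstructured sparsity, one could count zero-valued weights in the pruned layers. A minimal sketch, assuming the `model` loaded above and that pruning targets the encoder's `Linear` layers (embeddings excluded):

```python
# Count zero-valued weights in the encoder's Linear layers; the "encoder"
# name filter is an assumption about which layers were pruned.
import torch

total, zeros = 0, 0
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear) and "encoder" in name:
        w = module.weight
        total += w.numel()
        zeros += (w == 0).sum().item()

print(f"Encoder Linear sparsity: {zeros / total:.2%}")  # expected to be around 80%
```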

If you find the model useful, please consider citing our work.

## Citation info
```bibtex
@article{kurtic2022optimal,
  title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
  author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
  journal={arXiv preprint arXiv:2203.07259},
  year={2022}
}
```