# oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2

This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).


It corresponds to the model presented in `Table 2 - oBERT - QQP 90%` in the upcoming updated version of the paper.

```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 90%
Number of layers: 12
```
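
Because the pruning is unstructured, the reported 90% sparsity can be checked directly by counting zero-valued weights in the checkpoint. A minimal sketch, assuming the model is hosted on the Hugging Face Hub under the id below (inferred from the model name; adjust if the repository differs):

```python
from transformers import AutoModelForSequenceClassification

# Assumed Hub id for this checkpoint; substitute the actual repository name.
model_id = "neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2"
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Unstructured pruning zeroes individual weights, so the encoder's weight
# matrices should be close to the reported 90% sparsity overall.
total = zeros = 0
for name, param in model.named_parameters():
    if "encoder" in name and param.dim() == 2:  # linear-layer weight matrices
        total += param.numel()
        zeros += int((param == 0).sum())
print(f"Encoder weight sparsity: {zeros / total:.2%}")
```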

The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):

| oBERT 90%      | acc   | F1    |
| -------------- | ----- | ----- |
| seed=42        | 90.94 | 87.79 |
| seed=3407      | 91.00 | 87.81 |
| seed=123       | 90.94 | 87.73 |
| seed=12345 (*) | 91.07 | 87.92 |
| mean           | 90.99 | 87.81 |
| stdev          | 0.061 | 0.079 |
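
While the training code is still forthcoming (see below), the released checkpoint can be loaded with the standard `transformers` API. A minimal inference sketch for the QQP duplicate-question task, again assuming the Hub id used above:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# QQP is a sentence-pair task: decide whether two questions are duplicates.
q1 = "How can I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
inputs = tokenizer(q1, q2, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# In GLUE QQP, label 1 means duplicate; model.config.id2label holds the mapping.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```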

Code: _coming soon_

## BibTeX entry and citation info
```bibtex
@article{kurtic2022optimal,
  title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
  author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
  journal={arXiv preprint arXiv:2203.07259},
  year={2022}
}
```