---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---

# oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp-v2

This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).

It corresponds to the model presented in `Table 2 - oBERT - QQP 97%` (in the upcoming updated version of the paper).

```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 97%
Number of layers: 12
```

The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):

```
| oBERT 97%    | acc   | F1    |
| ------------ | ----- | ----- |
| seed=42 (*)  | 90.42 | 87.09 |
| seed=3407    | 90.31 | 86.87 |
| seed=123     | 90.20 | 86.76 |
| seed=12345   | 90.39 | 87.16 |
| ------------ | ----- | ----- |
| mean         | 90.33 | 86.97 |
| stdev        | 0.098 | 0.186 |
```

Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)

If you find the model useful, please consider citing our work.

## Citation info
```bibtex
@article{kurtic2022optimal,
    title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
    author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
    journal={arXiv preprint arXiv:2203.07259},
    year={2022}
}
```
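
## Usage (sketch)

A minimal inference sketch with Hugging Face `transformers`. The repository id (`neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp-v2`) and the label order (0 = not duplicate, 1 = duplicate) are assumptions, not confirmed by this card; adjust them to the actual checkpoint.

```python
# Sketch: paraphrase classification on a QQP-style question pair.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repo id; replace with the checkpoint's actual location.
model_id = "neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# QQP is a sentence-pair task: decide whether two questions are duplicates.
q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
inputs = tokenizer(q1, q2, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()

# Assumed label mapping: 1 = duplicate, 0 = not duplicate.
print("duplicate" if pred == 1 else "not duplicate")
```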
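
A rough way to sanity-check the advertised 97% unstructured sparsity is to count exact zeros in the prunable encoder weight matrices. The sketch below assumes pruned weights are stored as exact zeros and that embeddings, biases, and the classifier head are left dense, so it inspects only `Linear` weights inside the encoder; it reuses `model` from the snippet above.

```python
# Sketch: fraction of zero entries across encoder Linear weights (expected ~0.97).
import torch

zeros, total = 0, 0
for name, module in model.named_modules():
    # BERT encoder Linear layers live under names like "bert.encoder.layer.0...".
    if isinstance(module, torch.nn.Linear) and "encoder" in name:
        w = module.weight
        zeros += (w == 0).sum().item()
        total += w.numel()

print(f"encoder Linear weight sparsity: {zeros / total:.2%}")
```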