
# oBERT-12-downstream-pruned-unstructured-80-qqp

This model is obtained with *The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models*.

It corresponds to the model presented in Table 1 of the paper, row "30 Epochs - oBERT - QQP 80%".

- Pruning method: oBERT downstream unstructured
- Paper: https://arxiv.org/abs/2203.07259
- Dataset: QQP
- Sparsity: 80%
- Number of layers: 12
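
As a usage sketch (not part of the original card), the checkpoint can be loaded like any Hugging Face sequence-classification model. The repository id below is assumed from this model card's title and may need adjusting:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repository id, inferred from the model card title.
model_id = "neuralmagic/oBERT-12-downstream-pruned-unstructured-80-qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# QQP is a paraphrase task: the model scores whether two questions are duplicates.
inputs = tokenizer(
    "How do I learn Python?",
    "What is the best way to learn Python?",
    return_tensors="pt",
)
logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()  # GLUE QQP convention: 1 = duplicate, 0 = not duplicate
```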

The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with (*)):

| oBERT 80%   | acc   | F1    |
| ----------- | ----- | ----- |
| seed=42 (*) | 91.66 | 88.72 |
| seed=3407   | 91.51 | 88.56 |
| seed=54321  | 91.54 | 88.60 |
| mean        | 91.57 | 88.63 |
| stdev       | 0.079 | 0.083 |
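
To sanity-check the 80% sparsity figure, here is a minimal sketch (assuming the `model` object from the snippet above, and assuming the reported sparsity refers to the encoder's 2-D weight matrices, with embeddings and biases left dense) that counts exact zeros:

```python
def encoder_weight_sparsity(model) -> float:
    """Fraction of exactly-zero entries in the encoder's weight matrices."""
    total, zeros = 0, 0
    for name, param in model.named_parameters():
        # Restrict to 2-D weights inside the BERT encoder; this skips
        # embeddings, biases, LayerNorm, and the classifier head.
        if "encoder" in name and param.dim() == 2:
            total += param.numel()
            zeros += (param == 0).sum().item()
    return zeros / total

print(f"encoder weight sparsity: {encoder_weight_sparsity(model):.2%}")  # expected around 80%
```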

Code: https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT

If you find the model useful, please consider citing our work.

## Citation info

```bibtex
@article{kurtic2022optimal,
  title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
  author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
  journal={arXiv preprint arXiv:2203.07259},
  year={2022}
}
```